Lab Manual: Jaipur Engineering College and Research Center, Jaipur


LAB MANUAL

Lab Name : Software Engineering

Lab Code : 3CS4-23

Branch : Computer Science and Engineering

Year : 2nd Year

Jaipur Engineering College and Research Center, Jaipur


Department of Computer Science and Engineering
(Rajasthan Technical University, KOTA)
INDEX

S.NO  CONTENTS
1.    VISION/MISSION
2.    PEOs
3.    POs
4.    COs
5.    MAPPING OF CO & PO
6.    SYLLABUS
7.    BOOKS
8.    INSTRUCTIONAL METHODS
9.    LEARNING MATERIALS
10.   ASSESSMENT OF OUTCOMES
LIST OF EXPERIMENTS (RTU SYLLABUS)

Exp. 1 Objective: Develop requirements specification for a given problem.

Exp. 2 Objective: Develop DFD model (level-0, level-1 DFD and data dictionary) of the project.

Exp. 3 Objective: Develop structured design for the DFD model developed.

Exp. 4 Objective: Develop UML use case model for a problem.

Exp. 5 Objective: Develop sequence diagram.

Exp. 6 Objective: Develop class diagrams.

Exp. 7 Objective: Use a testing tool such as JUnit.

Exp. 8 Objective: Use one project management tool - Libra.

Weather Forecasting Using Data Mining

SOFTWARE ENGINEERING LAB


PROJECT FILE

Submitted By:-
Anurag Sharma

Arin Mangal

Aryan Khandelwal

Under the guidance of:

Prof. Suniti

Jaipur Engineering College and Research Centre


Jaipur, Rajasthan (302022)
Certificate
This is to certify that the project entitled 'Weather Forecasting Using Data Mining', submitted by Anurag Sharma, Arin Mangal and Aryan Khandelwal, is an authentic work carried out by them under my supervision and guidance in fulfillment of the requirements of the software engineering lab (2019-20).

To the best of my knowledge, the matter embodied in the project is true and accurate.

Date – 14/10/2019
JECRC Foundation

(Prof. Suniti)
Dept. of Computer Science and Engineering
Abstract
Weather conditions are changing continuously, and the entire world suffers from the changing climate and its side effects. Patterns in changing weather conditions therefore need to be observed. With this aim, the proposed work investigates weather condition patterns and a forecasting model for them. Data mining techniques enable us to analyse data and extract valuable patterns from it; therefore, in order to understand the fluctuating patterns of weather conditions, a data mining based predictive model is reported in this work. The proposed data model analyses historical weather data and identifies the significant patterns in the data. These identified patterns from the historical data enable us to approximate upcoming weather conditions and their outcomes. To design and develop such an accurate data model, a number of techniques were reviewed and the most promising approaches collected. The proposed data model incorporates the Hidden Markov Model for prediction, and K-means clustering for extraction of the weather condition observations. For predicting new or upcoming conditions, the system needs to accept the current weather scenario. The proposed technique is implemented in Java. Additionally, to justify the proposed model, a comparative study with the traditional ID3 algorithm is performed. To compare the two techniques, accuracy, error rate, and time and space complexity are estimated as the performance parameters. According to the obtained results, the performance of the proposed technique is found to be enhanced compared to the available ID3 based technique.
Acknowledgments

We express our profound gratitude and indebtedness to Prof. Suniti, Department of Computer Science and Engineering, JECRC Foundation, Jaipur, for introducing the present topic and for their inspiring intellectual guidance, constructive criticism and valuable suggestions throughout the project work.

We are also thankful to the other staff members of the Department of Computer Science and Engineering for motivating us to improve the project.

Finally, we would like to thank our parents for their support in completing this project.

Date – 14/10/2019
JECRC Foundation

Anurag Sharma

Arin Mangal

Aryan Khandelwal
JAIPUR ENGINEERING COLLEGE AND RESEARCH CENTER

Department of Computer Science and Engineering

Branch: Computer Science and Engineering Semester: 3rd

Course Name: Software Engineering Code: 3CS4-23

External Marks: 45 Practical hrs: 3 hr/week

Internal Marks: 30 Total Marks: 75

1. VISION & MISSION


VISION: To become a renowned centre of excellence in computer science and engineering and to produce competent engineers and professionals with high ethical values, prepared for lifelong learning.

MISSION:

M1: To impart outcome-based education for emerging technologies in the field of computer science and engineering.
M2: To provide opportunities for interaction between academia and industry.
M3: To provide a platform for lifelong learning by accepting change in technologies.
M4: To develop an aptitude for fulfilling social responsibilities.

PEO

1. To provide students with the fundamentals of Engineering Sciences, with more emphasis on Computer Science & Engineering, by way of analyzing and exploiting engineering challenges.
2. To train students with good scientific and engineering knowledge so as to comprehend, analyze, design, and create novel products and solutions for real-life problems.
3. To inculcate a professional and ethical attitude, effective communication skills, teamwork skills, a multidisciplinary approach, entrepreneurial thinking and an ability to relate engineering issues with social issues.
4. To provide students with an academic environment aware of excellence, leadership, written ethical codes and guidelines, and the self-motivated lifelong learning needed for a successful professional career.
5. To prepare students to excel in industry and higher education by educating them along with high moral values and knowledge.
2. PROGRAM OUTCOMES
Engineering Knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and
an engineering specialization to the solution of complex engineering problems in IT.
Problem analysis: Identify, formulate, research literature, and analyze complex engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences in IT.
Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the
public health and safety, and the cultural, societal, and environmental considerations using IT.
Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions using IT.
Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations in IT.
The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice using IT.
Environment and sustainability:Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development in IT.
Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice using IT.
Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings in IT.
Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.
Project Management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to
manage IT projects and in multidisciplinary environments.
Life –long Learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological changes needed in IT.
3. MAPPING OF PEOs & POs

PROGRAM OBJECTIVES    PROGRAM OUTCOMES (1-12)

I     H L H
II    M H M H H L H
III   L H M H L M
IV    L M H M H M
V     M M

4.COURSE OUTCOME
CO1: Create and specify a software design based on the requirement specification, such that the software can be implemented from the design.

CO2: Develop, design and implement structured DFDs and UML class diagrams.

5. MAPPING OF CO & PO

COURSE     1  2  3  4  5  6  7  8  9  10  11  12
OUTCOMES

I          H  H  H  H  L  L  L  L  H  M   H   H
II         H  H  H  H  H  L  L  L  H  M   H   M

INSTRUCTIONS OF LAB

DO’s
 Please switch off your mobile/cell phone before entering the lab.
 Enter the lab with complete source code and data.
 Check whether all peripherals are available at your desktop before proceeding with the program.
 Intimate the lab in-charge whenever you have difficulty using the system, or in case software gets corrupted/infected by a virus.
 Arrange all the peripherals and seats before leaving the lab.
 Properly shut down the system before leaving the lab.
 Keep your bag outside in the racks.
 Enter the lab on time and leave at the proper time.
 Maintain the decorum of the lab.
 Utilize lab hours for the corresponding experiment.
 Get your CD/pen drive checked by the lab in-charge before using it in the lab.

DON’TS
 No one is allowed to bring storage devices like pen drives/floppies etc. into the lab.
 Don't mishandle the system.
 Don't leave the system running unattended for long.
 Don't bring any external material into the lab.
 Don't make noise in the lab.
 Don't bring mobiles into the lab. If extremely necessary, then keep ringers off.
 Don't enter the lab without the permission of the lab in-charge.
 Don't litter in the lab.
 Don't delete or make any modification to system files.
 Don't carry any lab equipment outside the lab.
 We need your full support and cooperation for the smooth functioning of the project.
INSTRUCTIONS FOR STUDENT

BEFORE ENTERING IN THE LAB

 All the students are supposed to prepare the theory regarding the next program.
 Students are supposed to bring the practical file and the lab copy.
 Previous programs should be written in the practical file.
 Any student not following these instructions will be denied entry in the lab.

WHILE WORKING IN THE LAB

 Adhere to experimental schedule as instructed by the lab incharge.


 Get the previously executed program signed by the instructor.
 Get the output of the current program checked by the instructor in the lab copy.
 Each student should work on his/her assigned computer at each turn of the lab.
 Take responsibility of valuable accessories.
 Concentrate on the assigned practical and do not play games.
 If anyone is caught red-handed carrying any equipment of the lab, he/she will have to face serious consequences.

SYLLABUS:

 Develop requirements specification for a given problem (the requirements specification should include both functional and non-functional requirements), for a set of about 20 sample problems. (1 class)
 Develop DFD model (Level 0, Level 1 DFD and data dictionary) of the sample problem (use of a CASE tool required). (1 class)
 Develop structured design for the DFD model developed. (1 class)
 Develop UML use case model for a problem (use of a CASE tool such as Rational Rose, ArgoUML, or Visual Paradigm is required).
 Develop sequence diagrams.
 Develop class diagrams.
 Develop code for the developed class model using Java.
 Use a testing tool such as JUnit.
 Use a configuration management tool.
 Use any one project management tool such as Microsoft Project or GanttProject, etc.
Overview

 In data mining we use analysis tools to discover patterns and relationships in data that may be used to make valid predictions.
 In this proposed software we investigate the use of data mining in forecasting maximum temperature, rainfall, evaporation and wind speed.
 Weather forecasting is a vital application in meteorology and has been one of the most scientifically and technologically challenging problems around the world.
 Primary users of the system are the general public, aviation, fire and marine services.
 Aviation forecasters keep an eye on surface observations for wind shear and restrictions to visibility that could affect takeoffs and landings.
 Forecasters support fire weather programs by checking relative humidity, because it can have a critical impact on the behavior of fire. Every member of the population uses weather data on a regular basis; thinking of how weather can affect travel, activity and business decisions, the list of users becomes longer.
 Similar service providers are the National Weather Service (NWS) and popular sites like Weather Underground, forecast.io, WeatherSpark and Google.
Experiment 1:

Develop requirements specifications for a given problem

1.1 Introduction
Weather prediction is the application of science and technology to predict atmospheric conditions ahead of time for a particular region. Prediction is one of the basic goals of data mining. Data mining digs out knowledge and rules that are hidden and unknown, which the user may be interested in or which have potential value for decision-making, from large amounts of data. Such potential knowledge and rules can reveal the laws underlying the data. There are many kinds of technical methods of data mining, which mainly include: association rule mining algorithms, decision tree classification algorithms, clustering algorithms and time series mining algorithms, etc. [1]. How to store, manage and use these massive meteorological data, and how to discover and understand the laws and knowledge in the data so as to contribute to weather forecasting completely and effectively, has attracted more and more data mining researchers' attention [2]. This article constructs a weather forecasting platform, uses data mining for meteorological forecasting, and analyses the forecast results.

1.2 Weather Forecasting


Weather forecasting plays a significant role in meteorology [3]. Weather forecasting has
remained a formidable challenge because of its data intensive and frenzied nature. Generally,
two methods are used to forecast weather: a) the empirical approach and b) the dynamical
approach. The first approach is based on the occurrence of analogues and often referred to as
analogue forecasting. This approach is useful in predicting local scale weather if recorded cases
are plentiful. The second case is based upon equations and forward simulations of the
atmosphere and often referred to as computer modeling. Most weather prediction systems use
a combination of both of these techniques.

Figure 1 FAAS Models

This framework as a service (FAAS) has selected seven common forecasting methods. These are Regression (R), Logistic Regression, Time Series, Artificial Neural Network, Random Forest, Support Vector Machine and Multivariate Adaptive Regression Splines (MARS). For instance, Regression may encounter multicollinearity among variables, while Logistic Regression can only deal with datasets where the dependent variable is nominal.

1.3 Weather Prediction Architecture


Artificial Neural Networks (ANN) and Decision Trees (DT) were used to analyze meteorological data gathered in order to develop classification rules for the application of data mining techniques in weather prediction. Artificial Neural Networks have received special attention among the different forecasting methods in recent years [4, 5]. The main reason for the popularity of ANN is its capability of supervised learning of complex relations using non-linear functions [6]. This algorithm combines both time series and regression approaches. Weather parameters [7] over the study period use available historical data for the prediction of future weather conditions. The targets for prediction are those weather changes that affect our daily life, e.g. changes in minimum and maximum temperature, rainfall, evaporation and wind speed. These techniques are often more powerful, flexible, and efficient for exploratory analysis than statistical techniques. The most commonly used techniques in data mining are artificial neural networks, logistic regression, discriminant analysis and decision trees. With this model, temperature (T), rainfall (R) and wind speed (W) can easily be predicted. Earlier prediction methods used only a single parameter: for example, some researchers [8], [9] used wind speed, and other researchers used wind power [10], [11]. We have provided a more accurate prediction model, with more parameters and better efficiency (Figure 2).
Figure 2 Weather Data Prediction

There are three basic elements of a neuron model. Figure 3 shows the basic elements of the neuron model with the help of a perceptron model. These are: (i) a set of synapses, or connecting links, each of which is characterized by a weight/strength of its own; (ii) an adder, for summing the input signals, weighted by the respective synapses of the neuron; (iii) an activation function, for limiting the amplitude of the neuron's output. A typical input-output relation can be expressed as shown in Equation 1.

Figure 3 Model of a perceptron


net_j = Σ (i = 1 to n) W_ij · X_i + b_j
O_j = f_j(net_j)                                (1)

where X_i = inputs to the node, W_ij = weight between input node i and hidden node j, b_j = bias at node j, net = the adder output, and f = the activation function.
The type of transfer/activation function affects the size of the steps taken in weight space [12]. An ANN's architecture requires determining the number of connecting weights, and the way information flows through the network is set by the number of layers, the number of nodes in each layer, and their connectivity. The number of output nodes is fixed according to the estimated quantities. The number of input nodes depends on the problem under consideration and on the modeler's choice to utilize domain knowledge. The neurons in the hidden layer are increased gradually, and the network performance, in the form of an error, is examined.
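Equation 1 can be sketched in Java as follows; the sigmoid activation and the sample inputs, weights and bias below are illustrative assumptions, not values from the report:

```java
// Minimal perceptron sketch of Equation 1: net_j = sum_i(Wij * Xi) + bj, Oj = f(netj).
public class Perceptron {
    // Sigmoid chosen here as an example activation function f.
    public static double activate(double net) {
        return 1.0 / (1.0 + Math.exp(-net));
    }

    // The "adder" element: weighted sum of the inputs plus the bias.
    public static double net(double[] w, double[] x, double bias) {
        double sum = bias;
        for (int i = 0; i < w.length; i++) sum += w[i] * x[i];
        return sum;
    }

    public static void main(String[] args) {
        double[] x = {0.5, 0.2};           // example inputs Xi
        double[] w = {0.4, -0.6};          // example synaptic weights Wij
        double net = net(w, x, 0.1);       // bias bj = 0.1, net = 0.18
        System.out.println(activate(net)); // output Oj = f(netj)
    }
}
```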

1.4 Weather Forecasting in Cloud Computing


Cloud computing has improved the efficiency of data storage, delivery, and dissemination across multiple platforms and applications, allowing easier collaboration and data sharing, including in the data processing and distribution systems that disseminate key weather forecasts, severe weather warnings, and climate information. Data mining techniques and forecasting applications are very much needed in the cloud computing paradigm. In this study, data mining in cloud computing allows weather forecasting and data storage, with assurance of efficient, reliable and secure services for users. The implementation of data mining techniques through cloud computing will allow users to retrieve meaningful information from a virtually integrated data warehouse, which will reduce the costs of infrastructure and storage.

Experiment 2:
Develop DFD model (level-0, level-1 DFD and Data dictionary) of the
project.

2.1 SYSTEM MODEL

A data flow diagram (DFD) is a graphical representation of the flow of data through an information
system. A data flow diagram can also be used for the visualization of data processing (structured design).
It is common practice for a designer to draw a context-level DFD first which shows the interaction
between the system and outside entities. This context-level DFD is then exploded to show more detail of
the system being modeled.

The four components of a data flow diagram (DFD) are:


• External Entities/Terminators are outside of the system being modeled. Terminators represent
where information comes from and where it goes. In designing a system, we have no idea about
what these terminators do or how they do it.
• Processes modify the inputs in the process of generating the outputs
• Data Stores represent a place in the process where data comes to rest. A DFD does not say
anything about the relative timing of the processes, so a data store might be a place to
accumulate data over a year for the annual accounting process.
• Data Flows show how data moves between terminators, processes, and data stores (those that cross the system boundary are known as IO or Input-Output Descriptions).

Figures 2.1 and 2.2 represent the Level 0 and Level 1 Data Flow Diagrams respectively.

Context level Data Flow Diagram:

2.2 PROPOSED MODEL

A. Data Collection and pre-processing


The data used for this work was collected from NOAA (National Oceanic and Atmospheric Administration) [10]. The case data covered the months May to October over the period 2011 to 2014. The raw weather dataset contains 15 measured parameters: Station, Date, Mean temperature, Dew point pressure, Mean sea level pressure, Mean station pressure, Visibility, Wind speed, Maximum sustained wind speed, Maximum wind gust, Max temperature, Min temperature, Precipitation amount, Snow depth, and Rainfall. Out of these 15 features we used only the 5 most relevant attributes: Mean Temperature, Dew point pressure, Wind Speed, Visibility, and Rainfall. We used factor analysis and linear regression techniques to find the most relevant attributes needed for rainfall prediction, and ignored less relevant features in the dataset for model computation. We ignored variables like Station and Date, as they had distinct values and hence cannot be used for prediction. We also ignored Station pressure, Gust, Precipitation amount and Snow depth, as these variables had similar duplicate values and factor reduction was not possible. The linear regression results show that Mean Temp, Visibility, Dew point and Wind speed are the best predictors of Rainfall; therefore we have included these attributes for the prediction of rainfall, as shown in Figure 1.
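The relevance check described above can be approximated with a simpler proxy: ranking each attribute by its Pearson correlation with a numeric rainfall indicator. The sketch below uses correlation instead of full factor analysis/regression, and the class name and sample values are illustrative assumptions:

```java
// Illustrative attribute-relevance check: Pearson correlation between one
// weather attribute and a 0/1 rainfall indicator (hypothetical sample values).
public class Relevance {
    public static double pearson(double[] a, double[] b) {
        int n = a.length;
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        // Covariance and the two variances, accumulated in one pass.
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va  += (a[i] - ma) * (a[i] - ma);
            vb  += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }

    public static void main(String[] args) {
        double[] meanTemp = {83.5, 80.2, 81.0, 84.1};  // hypothetical readings
        double[] rainfall = {0, 1, 1, 0};              // 1 = rain observed
        // An attribute with |r| close to 1 is a strong predictor candidate.
        System.out.println(pearson(meanTemp, rainfall));
    }
}
```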

Figure 1. Linear Regression Results

The attribute values are numeric. The preprocessed attributes used are listed in Table 1.

Table 1. Data Description


Attribute           Type       Description
Mean Temperature    Numerical  Fahrenheit
Dew Point Pressure  Numerical  Fahrenheit
Wind Speed          Numerical  Kmph
Visibility          Numerical  Kmph
Rainfall            String     Yes/No
B. Bayesian Rainfall prediction model
Bayesian classifiers[12] are statistical classifiers. They can predict class membership probabilities such as
the probability that a given tuple belongs to a particular class.

Figure 2 Bayesian Prediction Model

The Bayesian Classifier is capable of calculating the most probable output depending on the input. The
flow of the model is shown in Fig 2. It is possible to add new raw data at runtime and have a better
probabilistic classifier. A Naive Bayes classifier assumes that the presence (or absence) of a particular
feature of a class is unrelated to the presence (or absence) of any other feature, given the class variable.
The system consists of two functions Train classifier and Classify. Train classifier function will train the
data set by calculating mean and variance of each variable as shown in Table 2.

Table-2. Mean and Variance Measurement

Mean  Variance  Mean      Variance  Mean        Variance    Mean   Variance
Temp  Temp      Dewpoint  Dewpoint  Visibility  Visibility  Speed  Speed

83.5  6.7       74.9      13.9      2.5         0.16        7.8    12.2
80.2  5.4       76.8      1.99      2.36        0.18        9.1    12.6

The classifier is created from the training dataset using a Gaussian distribution. The Classify function finds the probabilities using the normal distribution. In order to get the probability P(Temp/Yes) we use the formula

P(x) = (1 / (σ · √(2π))) · e^(−(x − µ)² / (2σ²))

Here x is the value of the temperature from the test data, and µ and σ are the mean and standard deviation of temperature calculated from the training dataset. A similar process is repeated for all the other attributes to get the individual probabilities.
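The Train classifier/Classify pair described above can be sketched in Java. The mean and variance below match the first row of Table 2, but the class name and the test value are illustrative assumptions:

```java
// Sketch of the two functions described above: "Train classifier" computes the
// mean and variance of an attribute; "Classify" evaluates the Gaussian density
// P(x | class) from those statistics.
public class GaussianNB {
    public static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    public static double variance(double[] xs, double mu) {
        double s = 0;
        for (double x : xs) s += (x - mu) * (x - mu);
        return s / (xs.length - 1);  // sample variance
    }

    // Normal density: (1 / sqrt(2*pi*var)) * exp(-(x - mu)^2 / (2*var)).
    public static double gaussian(double x, double mu, double var) {
        return Math.exp(-(x - mu) * (x - mu) / (2 * var))
               / Math.sqrt(2 * Math.PI * var);
    }

    public static void main(String[] args) {
        // Mean temperature on "Rainfall = Yes" days (Table 2: mean 83.5, var 6.7).
        double mu = 83.5, var = 6.7;
        double x = 82.0;  // temperature taken from a test instance
        System.out.println(gaussian(x, mu, var));  // P(Temp = 82 | Yes)
    }
}
```

The same density call is repeated per attribute, and the per-attribute probabilities are multiplied together as in the computational illustration that follows.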

1) Computational illustration of rainfall prediction: For the classification rainfall = Yes, the probability is given by:

P(Rainfall=Yes) = P(Rain=Yes) · P(Temp/Yes) · P(Dewpoint/Yes) · P(Visibility/Yes) · P(WindSpeed/Yes)

For the classification rainfall = No, the probability is given by:

P(Rainfall=No) = P(Rain=No) · P(Temp/No) · P(Dewpoint/No) · P(Visibility/No) · P(WindSpeed/No)

If P(Rainfall=Yes) > P(Rainfall=No), then predict Rainfall = Yes.

If P(Rainfall=Yes) < P(Rainfall=No), then predict Rainfall = No.

2) Result of rainfall prediction (Bayesian approach):

In this model, we have used training datasets of Panjim city. We used the actual monsoon data of the year 2015 as test data to compare with the model results. The dataset and the obtained results are shown below in Table 3. The model is observed to be accurate.

Table-3. Accuracy and Error Measurement


Dataset      Training Dataset  Test Dataset  Accuracy  Error
Panjim City  847               184           80.43%    19.56%

C. K-NN (K-Nearest-Neighbor) Model

The K-Nearest-Neighbor classifier [14] is based on learning by analogy, that is, by comparing a given test tuple with training tuples that are similar to it. The training tuples are described by n attributes, and each tuple represents a point in an n-dimensional space. In this way, all the training tuples are stored in an n-dimensional pattern space. When given an unknown tuple, a k-Nearest-Neighbor classifier searches the pattern space for the k training tuples that are closest to the unknown tuple.
Figure 3 K-NN Prediction Model

The flow of the model is as shown in Fig 3. Here K is the number of instances used to cast the vote when labelling a previously unobserved instance. K-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. Both for classification and regression, a useful technique is to assign weights to the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones. A common weighting scheme gives each neighbor a weight of 1/d, where d is the distance to the neighbour.

Given an unobserved test instance I = {i0, …, in, class}, we calculate the Euclidean distance between I and each known instance in the dataset as follows:

d(I, Z) = √( Σ_k (I_k − Z_k)² )

Here Z is a sequence of values from the training dataset of some instance i in attribute k for which a classification is given, and I is the unclassified test data instance. The distances were calculated on normalized data: each value Z from the dataset is normalized, and the instance that we need to classify is also normalized. Once the distances are calculated, we can proceed to vote on which class the instance I should belong to. To do this, we select the K smallest distances and look at their corresponding classes.

The value of K in this case was taken as 3.
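The normalize/measure/vote steps above can be sketched in Java. The toy training points, labels and the min-max style of normalization are illustrative assumptions, not the report's actual data:

```java
import java.util.Arrays;

// K-NN sketch: normalize attribute values, compute Euclidean distances,
// then vote among the K = 3 nearest labels. The dataset is hypothetical.
public class Knn {
    public static double euclidean(double[] a, double[] b) {
        double s = 0;
        for (int k = 0; k < a.length; k++) s += (a[k] - b[k]) * (a[k] - b[k]);
        return Math.sqrt(s);
    }

    // Min-max normalization of one attribute value to [0, 1] (one common choice).
    public static double normalize(double z, double min, double max) {
        return (z - min) / (max - min);
    }

    public static void main(String[] args) {
        double[][] train = {{0.1, 0.9}, {0.2, 0.8}, {0.9, 0.1}, {0.8, 0.2}};
        String[] labels  = {"Yes", "Yes", "No", "No"};
        double[] test    = {0.15, 0.85};  // already-normalized test instance
        int K = 3;

        // Sort training indices by distance to the test instance, then vote.
        Integer[] idx = {0, 1, 2, 3};
        Arrays.sort(idx, (i, j) -> Double.compare(
                euclidean(train[i], test), euclidean(train[j], test)));
        int yes = 0;
        for (int k = 0; k < K; k++) if (labels[idx[k]].equals("Yes")) yes++;
        System.out.println(yes > K - yes ? "Yes" : "No");  // majority vote
    }
}
```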

2) Result of rainfall prediction (K-NN approach):

In this model, we have used the actual datasets of Panjim city. We used the actual monsoon data of the year 2015 as test data to compare with the model results. The obtained results are shown below in Table 4.
Experiment 3:
Develop Structured design for the DFD model developed.

2. ALGORITHM STUDY
This section includes the study of the different algorithms that are used for developing the proposed model. The list of algorithms is given as:

2.1 K-Means
The K-Means clustering algorithm is a partition-based cluster analysis method. According to the algorithm, we first select k objects as initial cluster centers, then calculate the distance between each object and each cluster center and assign the object to the nearest cluster, update the averages of all clusters, and repeat this process until the criterion function converges. The square error criterion for clustering is

E = Σ (i = 1 to k) Σ (j = 1 to n_i) || x_j(i) − m_i ||²

where x_j(i) is sample j of class i, m_i is the center of class i, and n_i is the number of samples of class i.

Table 1 shows the K-means algorithm steps:

Input: N objects to be clustered (x1, x2, …, xn); the number of clusters k.
Output: k clusters, such that the sum of dissimilarity between each object and its nearest cluster center is the smallest.
Process:
 1) Arbitrarily select k objects as initial cluster centers.
 2) Calculate the distance d(i, j) between each object Xi and each cluster center j, then assign each object to the nearest cluster.
 3) Calculate the mean of the objects in each cluster as the new cluster center, where n_i is the number of samples of the current cluster i.
 4) Repeat steps 2) and 3) until the criterion function E converges; the algorithm then terminates.
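The steps above can be sketched as a minimal one-dimensional K-means in Java; the temperature readings and the initial centres are illustrative assumptions:

```java
// One-dimensional K-means following the steps in Table 1: assign each point
// to its nearest centre, recompute the centres as cluster means, and repeat
// until the centres stop moving (criterion converged).
public class KMeans {
    public static double[] cluster(double[] xs, double[] centers) {
        while (true) {
            double[] sum = new double[centers.length];
            int[] count = new int[centers.length];
            // Step 2: assign each object to its nearest cluster centre.
            for (double x : xs) {
                int best = 0;
                for (int c = 1; c < centers.length; c++)
                    if (Math.abs(x - centers[c]) < Math.abs(x - centers[best]))
                        best = c;
                sum[best] += x;
                count[best]++;
            }
            // Step 3: recompute each centre as the mean of its cluster.
            double shift = 0;
            for (int c = 0; c < centers.length; c++) {
                double next = count[c] == 0 ? centers[c] : sum[c] / count[c];
                shift += Math.abs(next - centers[c]);
                centers[c] = next;
            }
            // Step 4: stop once the centres no longer move.
            if (shift == 0) return centers;
        }
    }

    public static void main(String[] args) {
        // Hypothetical temperature readings forming two natural groups.
        double[] temps = {21, 22, 23, 35, 36, 37};
        double[] centers = cluster(temps, new double[]{20, 40});
        System.out.println(centers[0] + " " + centers[1]);  // 22.0 36.0
    }
}
```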

2.2 HMM
An HMM is a doubly embedded stochastic process with two hierarchy levels. It can be used to model much more complex stochastic processes than a traditional Markov model. In a specific state, an observation can be generated according to an associated probability distribution. It is only the observation, and not the state, that is visible to an external observer. An HMM can be characterized by the following:

 N is the number of states in the model. We denote the set of states S = {S1, S2, …, SN}, where Si, i = 1, 2, …, N, is an individual state. The state at time instant t is denoted by qt.

 M is the number of distinct observation symbols per state. We denote the set of symbols V = {V1, V2, …, VM}, where Vi, i = 1, 2, …, M, is an individual symbol.

 The state transition probability matrix A = [aij], where aij = P(qt+1 = Sj | qt = Si). Here aij ≥ 0 for all i, j. Also, Σ (j = 1 to N) aij = 1 for all i.

 The observation symbol probability matrix B = {bj(k)}, where bj(k) = P(Vk at time t | qt = Sj).

 The initial state probability vector π = {πi}, where πi = P(q1 = Si), such that Σ (i = 1 to N) πi = 1.

 The observation sequence O = O1, O2, O3, …, OR, where each observation Ot is one of the symbols from V, and R is the number of observations in the sequence.
It is manifest that a complete specification of an HMM needs the estimation of two model parameters, N and M, and three probability distributions A, B, and π. We use the notation λ = (A, B, π) to specify the complete set of parameters of the model, where A and B implicitly contain N and M.

An observation sequence O, as mentioned above, can be generated by many possible state sequences. Consider one such particular sequence Q = q1, q2, …, qR, where q1 is the initial state. The probability that O is generated from this state sequence is given by

P(O | Q, λ) = Π (t = 1 to R) P(Ot | qt, λ)

where statistical independence of observations is assumed. The above equation can be expanded as

P(O | Q, λ) = b_q1(O1) · b_q2(O2) ··· b_qR(OR)

The probability of the state sequence Q is given as

P(Q | λ) = π_q1 · a_q1q2 · a_q2q3 ··· a_q(R−1)qR

Thus, the probability of generation of the observation sequence O by the HMM specified by λ can be written as follows:

P(O | λ) = Σ (over all Q) P(O | Q, λ) · P(Q | λ)

Deriving the value of P(O | λ) using the direct definition is computationally intensive. Hence, a procedure named the Forward-Backward procedure is used to compute P(O | λ).
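The forward half of that procedure can be sketched in Java: α_1(i) = π_i · b_i(O1), α_{t+1}(j) = [Σ_i α_t(i) · a_ij] · b_j(O_{t+1}), and P(O | λ) = Σ_i α_R(i). The two-state model parameters below are illustrative, not taken from the proposed weather model:

```java
// Forward-procedure sketch for computing P(O | lambda) without enumerating
// every state sequence Q.
public class HmmForward {
    public static double forward(double[][] a, double[][] b, double[] pi, int[] obs) {
        int n = pi.length;
        double[] alpha = new double[n];
        for (int i = 0; i < n; i++) alpha[i] = pi[i] * b[i][obs[0]]; // initialization
        for (int t = 1; t < obs.length; t++) {                      // induction
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double s = 0;
                for (int i = 0; i < n; i++) s += alpha[i] * a[i][j];
                next[j] = s * b[j][obs[t]];
            }
            alpha = next;
        }
        double p = 0;
        for (int i = 0; i < n; i++) p += alpha[i];                  // termination
        return p;
    }

    public static void main(String[] args) {
        double[][] a = {{0.7, 0.3}, {0.4, 0.6}};  // state transition matrix A
        double[][] b = {{0.9, 0.1}, {0.2, 0.8}};  // observation matrix B
        double[] pi  = {0.5, 0.5};                // initial probabilities
        int[] obs    = {0, 1, 0};                 // an observation sequence O
        System.out.println(forward(a, b, pi, obs));  // ≈ 0.099375
    }
}
```

This runs in O(N²·R) time, versus the exponential cost of summing over all N^R state sequences directly.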
Experiment 4:
Develop UML Use case model for a problem.

Unified Modeling Language


Unified Modeling Language (UML) is a general-purpose modelling language. The main aim of UML is to define a standard way to visualize the way a system has been designed. It is quite similar to blueprints used in other fields of engineering.

UML is not a programming language; it is rather a visual language. We use UML diagrams to portray the behavior and structure of a system. UML helps software engineers, businessmen and system architects with modelling, design and analysis.

The Object Management Group (OMG) adopted Unified Modelling Language as a standard in 1997, and it has been managed by OMG ever since. The International Organization for Standardization (ISO) published UML as an approved standard in 2005. UML has been revised over the years and is reviewed periodically.

Do we really need UML?


• Complex applications need collaboration and planning from multiple teams and
hence require a clear and concise way to communicate amongst them.
• Businessmen do not understand code. So UML becomes essential to communicate
with non programmers essential requirements, functionalities and processes of
the system.
• A lot of time is saved down the line when teams are able to visualize processes,
user interactions and static structure of the system.
UML is linked with object oriented design and analysis. UML makes the use of

elements and forms associations between them to form diagrams. Diagrams in UML

can be broadly classified as:

1. Structural Diagrams – Capture static aspects or structure of a system. Structural

Diagrams include: Component Diagrams, Object Diagrams, Class Diagrams and


Deployment Diagrams.
2. Behavior Diagrams – Capture dynamic aspects or behavior of the system.

Behavior diagrams include: Use Case Diagrams, State Diagrams, Activity


Diagrams and Interaction Diagrams.
The image below shows the hierarchy of diagrams according to UML 2.2
Object-Oriented Concepts Used in UML –

1. Class – A class defines the blueprint, i.e. the structure and functions, of an object.
2. Objects – Objects help us to decompose large systems and to modularize our system. Modularity helps to divide our system into understandable components so that we can build our system piece by piece. An object is the fundamental unit (building block) of a system and is used to depict an entity.
3. Inheritance – Inheritance is a mechanism by which child classes inherit the properties of their parent classes.
4. Abstraction – The mechanism by which implementation details are hidden from the user.
5. Encapsulation – Binding data together and protecting it from the outer world is referred to as encapsulation.
6. Polymorphism – The mechanism by which functions or entities are able to exist in different forms.
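These six concepts can be seen together in a few lines of code. The sketch below uses invented Sensor classes purely for illustration; none of these names come from the weather-forecasting project itself.

```python
class Sensor:
    """Class: a blueprint for sensor objects. Abstraction: callers use
    reading() without knowing how the value is produced."""

    def __init__(self, name):
        self._name = name            # encapsulation: state kept behind methods

    def reading(self):
        raise NotImplementedError    # subclasses must supply the behaviour


class TemperatureSensor(Sensor):     # inheritance: reuses Sensor's __init__
    def reading(self):               # polymorphism: same call, different form
        return self._name + ": 21.5 C"


class HumiditySensor(Sensor):
    def reading(self):
        return self._name + ": 60 %"


# Objects: the building blocks that UML diagrams depict
sensors = [TemperatureSensor("T1"), HumiditySensor("H1")]
readings = [s.reading() for s in sensors]   # one call site, two behaviours
```

The single `s.reading()` call site dispatching to two different implementations is exactly the polymorphism that class and sequence diagrams later have to capture.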

Experiment 5:

Develop sequence diagram

a) Sequence Diagram –
Sequence diagrams are used to demonstrate the behavior of objects in a use case by describing the objects and the messages they pass. The diagrams are read from left to right and top to bottom. Here the user first interacts with NewWeather, which sends a message to Login and shows the neural-network GUI. After that, the weights are initialized in frmNeural. frmNeural sends a message to ParseTree, which sends a message to TreeNode, and ParseTree then sends a message to DataPoint. Finally, ParseTree generates the output.

b) Collaboration Diagram –
The second interaction diagram is the collaboration diagram. It shows the object organization as shown below. A collaboration diagram shows the relationship between objects and the order of the messages passed between them. The objects are listed as icons, and arrows indicate the messages being passed between the objects. The numbers next to the messages are called sequence numbers. As the name suggests, they show the sequence of the messages as they are passed between the objects.


Experiment 6:
Develop Class diagrams

UML Class Diagram

An overview of a system, its classes and the relationships among them is given by a class diagram. Class diagrams are static: they display what interacts, but not what happens when the interaction takes place. A class diagram therefore describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.
Since the class diagram is the main building block in object-oriented modeling, it is used both for general conceptual modeling of the application and for detailed modeling, translating the models into programming code.
The class diagram figure shows the classes of the Weather Forecasting System. NewWeatherForecast is the main class of the system. It coordinates with the Login class, and the Login class coordinates with frmNeural.
The frmNeural class handles the neural network's weight changes and the overall network error after each epoch, and calculates the outputs of the hidden neurons. The ParseTree class describes extracting decision rules or decision trees from the trained network.
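The coordination between these classes can also be summarized as a code skeleton. The sketch below is only a guess at the shape implied by the class diagram (the real project's fields and methods are not listed in this manual); the bodies are placeholders, written in Python for brevity.

```python
class DataPoint:
    """A single observation fed to / produced by the system (assumed shape)."""
    def __init__(self, features, target):
        self.features = features
        self.target = target


class TreeNode:
    """One node of an extracted decision tree (assumed shape)."""
    def __init__(self, rule, children=None):
        self.rule = rule
        self.children = children or []


class ParseTree:
    """Extracts decision rules / decision trees from the trained network."""
    def __init__(self):
        self.root = None

    def extract_rules(self, network_name):
        # Placeholder: the real extraction logic is not shown in the manual
        self.root = TreeNode("<rules from " + network_name + ">")
        return self.root


class FrmNeural:
    """Tracks weight changes and per-epoch error; drives rule extraction."""
    def __init__(self):
        self.parse_tree = ParseTree()   # frmNeural -> ParseTree association


class Login:
    def __init__(self):
        self.frm_neural = FrmNeural()   # Login -> frmNeural association


class NewWeatherForecast:
    """Main class of the system: coordinates with Login."""
    def __init__(self):
        self.login = Login()
```

Each `self.x = X()` line corresponds to one association arrow in the class diagram, which is the sense in which a class diagram "translates into programming code".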
Experiment 7:

Use testing tool such as JUnit

Testing is the process of checking the functionality of an application to ensure that it works as per the requirements. To ensure quality at the developer level, unit testing comes into the picture. Unit testing is the testing of a single entity (class or method). Unit testing is essential for every software company in order to deliver a quality product to its customers.
Unit testing can be done in two ways: manual testing and automated testing.

Manual testing – executing the test cases manually, without any tool support:
● Time consuming and tedious: since test cases are executed by human resources, it is very slow and tedious.
● Huge investment in human resources: as test cases need to be executed manually, more testers are required.
● Less reliable: tests may not be performed with precision each time because of human errors.
● Non-programmable: no programming can be done to write sophisticated tests which fetch hidden information.

Automated testing – executing the test cases with the support of an automation tool:
● Fast: automation runs test cases significantly faster than human resources.
● Less investment in human resources: test cases are executed by an automation tool, so fewer testers are required.
● More reliable: automation tests perform precisely the same operation each time they are run.
● Programmable: testers can program sophisticated tests to bring out hidden information.
What is JUnit?

JUnit is a unit testing framework for the Java programming language. It is important in test-driven development, and is one of a family of unit testing frameworks collectively known as xUnit.

JUnit promotes the idea of "first testing, then coding", which emphasizes setting up the test data for a piece of code that can be tested first and then implemented. This approach is like "test a little, code a little, test a little, code a little...", which increases programmer productivity and the stability of the program code, reducing programmer stress and the time spent on debugging.

Features
● JUnit is an open source framework which is used for writing & running tests.

● Provides Annotation to identify the test methods.

● Provides Assertions for testing expected results.

● Provides Test runners for running tests.

● JUnit tests allow you to write code faster, which increases quality.

● JUnit is elegantly simple. It is less complex and takes less time.

● JUnit tests can be run automatically and they check their own results and provide immediate
feedback. There's no need to manually comb through a report of test results.

● JUnit tests can be organized into test suites containing test cases and even other test suites.

● JUnit shows test progress in a bar that is green if the tests are going fine, and it turns red when a test fails.
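JUnit itself is Java; as a language-neutral illustration of the same xUnit pattern (a test case class, named test methods, assertions, and a runner that reports green/red), the sketch below uses Python's built-in unittest module instead. The `celsius_to_fahrenheit` function is an invented unit under test, not code from this project.

```python
import unittest


def celsius_to_fahrenheit(c):
    """Hypothetical unit under test: a single method, tested in isolation."""
    return c * 9 / 5 + 32


class TestConversion(unittest.TestCase):      # a test case, xUnit style
    def test_freezing_point(self):            # like a JUnit @Test method
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)


# Test runner: collects the test methods and reports success or failure
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestConversion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Notice the test data (0 → 32, 100 → 212) is written down before the function needs to exist, which is the "first testing, then coding" idea described above.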
Experiment 8:

Using configuration tool such as Libra

Installation and Use


The Libra features can be installed from the p2 repository of the Indigo Simultaneous Release
(since Indigo M6). As a prerequisite you may install Eclipse IDE for Java EE Developers.

The update site contains:

● OSGi Bundle Facet feature that introduces:


1. A new facet OSGi Bundle for Dynamic Web, JPA and Utility projects.
2. Wizard for converting WTP standard projects to OSGi Enterprise bundle projects:
● Dynamic Web projects to Web Application Bundle projects
● JPA projects to Persistent Bundle projects
● Utility projects and simple Java projects to OSGi Bundle projects

Both options modify project's MANIFEST.MF in order to become a valid OSGi bundle.
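As a rough illustration of the kind of headers this conversion adds, a minimal MANIFEST.MF for a Web Application Bundle might look like the following; the symbolic name, version and context path are made-up examples, not values Libra generates for you.

```text
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.weatherweb
Bundle-Version: 1.0.0
Web-ContextPath: /weather
Import-Package: javax.servlet
```

Bundle-SymbolicName and Bundle-Version identify the bundle to the OSGi runtime, while Web-ContextPath is what distinguishes a Web Application Bundle from a plain OSGi bundle.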

The facet may be enabled during the project creation or after that from the Properties page of
the project. The wizard is available from project's context menu Configure > Convert to OSGi
Bundle Projects...

Note that you may need to adjust your target platform accordingly.

● WAR Products feature which provides WAR deployment for Equinox based applications

Create new Web Application Bundle


1. Call the New Dynamic Web Project wizard: New > Project... > Web > Dynamic Web
Project
2. Enter the necessary project information like Project name, Target runtime, etc.
3. Add the OSGi Bundle facet in the Configuration:
1. Click on the Modify... button in the Configuration group.
2. Choose the OSGi Bundle facet in the Project Facets dialog and click OK.
4. Click Finish to create the Web Application Bundle project.

Create new OSGi Bundle


1. Call the New Faceted Project wizard: New > Project... > General > Faceted Project
2. Enter the necessary project information like Project name.
3. Click the Next button.
4. Select the OSGi Bundle and Java facets.
5. Click Finish to create the OSGi Bundle project.

Obtaining Sources
You can find the sources in the Git repository.

In order to synchronize them locally, you may use the EGit step-by-step procedure.

The EGit/User Guide provides detailed instructions on how to work with EGit.

Updating/Installing EGit
● Start your Eclipse IDE and navigate to Help->Install New Software->Add...
● Enter the software update site [1]
● Select Eclipse EGit (Incubation) and Eclipse JGit (Incubation) and choose Next > to finish the installation
