Lab Manual: Jaipur Engineering College and Research Center, Jaipur
2nd Year
1. VISION/MISSION
2. PEO
3. POs
4. COs
5. MAPPING OF CO & PO
6. SYLLABUS
7. BOOKS
8. INSTRUCTIONAL METHODS
9. LEARNING MATERIALS
10. ASSESSMENT OF OUTCOMES
LIST OF EXPERIMENTS (RTU SYLLABUS)
Exp. 2 Objective: Develop DFD model (level-0, level-1 DFD and data dictionary) of the project.
Submitted By:-
Anurag Sharma
Arin Mangal
Aryan Khandelwal
To the best of my knowledge, the matter embodied in the project is true and accurate.
Date – 14/10/2019
JECRC Foundation
(Prof. Suniti)
Dept. of Computer Science and Engineering
Abstract
Weather conditions are changing continuously, and the entire world suffers from the changing
climate and its side effects. Patterns in these changing weather conditions therefore need to be
observed. With this aim, the proposed work investigates weather condition patterns and a
forecasting model for them. Data mining techniques enable us to analyse data and extract
valuable patterns from it. Therefore, in order to understand the fluctuating patterns of weather
conditions, a data-mining-based predictive model is reported in this work. The proposed data
model analyses historical weather data and identifies the significant patterns in the data. These
patterns identified from the historical data enable us to approximate upcoming weather
conditions and their outcomes. To design and develop such an accurate data model, a number of
techniques were reviewed and the most promising approaches collected. The proposed data model
incorporates the Hidden Markov Model for prediction, and K-means clustering is used to extract
the weather condition observations. For predicting new or upcoming conditions, the system needs
to accept the current weather-condition scenarios. The proposed technique is implemented in
JAVA. Additionally, to justify the proposed model, a comparative study with the traditional ID3
algorithm is used. To compare the two techniques, accuracy, error rate, and time and space
complexity are estimated as the performance parameters. According to the obtained results, the
performance of the proposed technique is enhanced compared to the available ID3-based
technique.
Acknowledgments
We express our profound gratitude and indebtedness to Prof. Suniti, Department of Computer Science
and Engineering, JECRC Foundation, Jaipur, for introducing the present topic and for her inspiring
intellectual guidance, constructive criticism and valuable suggestions throughout the project work.
We are also thankful to the other staff members of the Department of Computer Science and
Engineering for motivating us to improve the project.
Finally, we would like to thank our parents for their support in completing this project.
Date – 14/10/2019
JECRC Foundation
Anurag Sharma
Arin Mangal
Aryan Khandelwal
JAIPUR ENGINEERING COLLEGE AND RESEARCH CENTER
MISSION:
M1: To impart outcome based education for emerging technologies in the field of computer science and
engineering.
M2: To provide opportunities for interaction between academia and industry.
M3: To provide a platform for lifelong learning by accepting changes in technologies.
M4: To develop the aptitude for fulfilling social responsibilities.
PEOs
1. To provide students with the fundamentals of Engineering Sciences, with more emphasis on
Computer Science & Engineering, by way of analyzing and exploiting engineering challenges.
2. To train students with good scientific and engineering knowledge so as to comprehend, analyze,
design, and create novel products and solutions for real-life problems.
3. To inculcate a professional and ethical attitude, effective communication skills, teamwork skills, a
multidisciplinary approach, entrepreneurial thinking, and an ability to relate engineering issues to
social issues.
4. To provide students with an academic environment aware of excellence, leadership, written ethical
codes and guidelines, and the self-motivated life-long learning needed for a successful professional
career.
5. To prepare students to excel in industry and higher education by educating them with high moral
values and knowledge.
2. PROGRAM OUTCOMES
Engineering Knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and
an engineering specialization to the solution of complex engineering problems in IT.
Problem analysis: Identify, formulate, research literature, and analyze complex engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences in IT.
Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the
public health and safety, and the cultural, societal, and environmental considerations using IT.
Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions using IT.
Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations in IT.
The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice using IT.
Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development in IT.
Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice using IT.
Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings in IT.
Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.
Project Management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to
manage IT projects and in multidisciplinary environments.
Life –long Learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological changes needed in IT.
3. MAPPING OF PEOs & POs
I H L H
II M H M H H L H
III L H M H L M
IV L M H M H M
V M M
4.COURSE OUTCOME
CO1: Create and specify a software design based on the requirement specification, such that the
software can be implemented from the design.
CO2: Design, develop, and implement structured DFDs and UML class diagrams.
5. MAPPING OF CO & PO
II H H H H H L L L H M H M
INSTRUCTIONS OF LAB
DO’s
Please switch off the Mobile/Cell phone before entering Lab.
Enter the Lab with complete source code and data.
Check whether all peripheral are available at your desktop before proceeding for program.
Intimate the lab in-charge whenever you are unable to use the system or in case software
gets corrupted/infected by a virus.
Arrange all the peripheral and seats before leaving the lab.
Properly shutdown the system before leaving the lab.
Keep the bag outside in the racks.
Enter the lab on time and leave at proper time.
Maintain the decorum of the lab.
Utilize lab hours in the corresponding experiment.
Get your CD/pen drive checked by the lab in-charge before using it in the lab.
DON’TS
No one is allowed to bring storage devices like pen drives/floppies etc. into the lab.
Don’t mishandle the system.
Don’t leave the system running unattended for long.
Don’t bring any external material in the lab.
Don’t make noise in the lab.
Don’t bring mobiles into the lab. If extremely necessary, keep the ringer off.
Don’t enter the lab without the permission of the lab in-charge.
Don’t litter in the lab.
Don’t delete or make any modification in system files.
Don’t carry any lab equipment outside the lab.
We need your full support and cooperation for the smooth functioning of the project.
INSTRUCTIONS FOR STUDENT
All the students are supposed to prepare the theory regarding the next program.
Students are supposed to bring the practical file and the lab copy.
Previous programs should be written in the practical file.
Any student not following these instructions will be denied entry in the lab.
SYLLABUS:
1.1 Introduction
Weather prediction is the application of science and technology to predict atmospheric
conditions ahead of time for a particular region. Prediction is one of the basic goals of data
mining. Data mining digs out knowledge and rules that are hidden and unknown, which the user
may be interested in or which have potential value for decision-making, from large amounts of
data. Such potential knowledge and rules can reveal the laws underlying the data. There are many
technical methods of data mining, mainly including association rule mining algorithms, decision
tree classification algorithms, clustering algorithms and time series mining algorithms, etc. [1].
How to store, manage and use these massive meteorological data, and how to discover and
understand the laws and knowledge in the data so as to contribute to weather forecasting
completely and effectively, has attracted more and more data mining researchers' attention [2].
This article constructs a weather forecasting platform, uses data mining for meteorological
forecasting, and analyzes the forecast results.
This framework as a service (FaaS) has selected seven common forecasting methods: Regression (R),
Logistic Regression, Time Series, Artificial Neural Network, Random Forest, Support Vector Machine
and Multivariate Adaptive Regression Splines (MARS). Each has limitations; for instance, Regression
may encounter multicollinearity among variables, and Logistic Regression can only deal with datasets
where the dependent variable is nominal.
There are three basic elements of a neuron model. Figure 3 shows the basic elements of the
neuron model with the help of a perceptron model, which are: (i) a set of synapses, or connecting
links, each of which is characterized by a weight/strength of its own; (ii) an adder, for summing the
input signals, weighted by the respective neuron's synapses; and (iii) an activation function, for
limiting the amplitude of the neuron's output. A typical input-output relation can be expressed as
shown in Equation 1.

O_i = f_i(net_i), where net_i = sum_j (w_ij * x_j) + b_i    (1)

where x_j = inputs to the input nodes, w_ij = weight between input node j and hidden node i, b_i =
bias at the node, net = adder output, and f = activation function.
The type of transfer/activation function affects the size of the steps taken in weight
space [12]. Determining an ANN's architecture (the number of connecting weights and the way
information flows through the network) is carried out via the number of layers, the number of
nodes in each layer, and their connectivity. The number of output nodes is fixed according to the
quantities being estimated. The number of input nodes depends on the problem under
consideration and on the modeler's choice to utilize domain knowledge. The neurons in the
hidden layer are increased gradually, and the network performance, in the form of an error, is
examined.
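As a concrete illustration of Equation 1, the sketch below computes a single neuron's output in Java (the document's implementation language); the weights, bias, and sigmoid activation are illustrative assumptions, not values from the project.

```java
// Minimal sketch of the neuron model of Equation 1: O_i = f_i(net_i),
// where net_i is the weighted sum of the inputs plus a bias, and f is the
// activation function (a logistic sigmoid here, chosen for illustration).
public class Neuron {
    // Adder: weighted sum of inputs plus bias (net_i in Equation 1).
    static double net(double[] x, double[] w, double bias) {
        double sum = bias;
        for (int j = 0; j < x.length; j++) sum += w[j] * x[j];
        return sum;
    }

    // Activation function f: logistic sigmoid, limiting the output amplitude to (0, 1).
    static double sigmoid(double net) {
        return 1.0 / (1.0 + Math.exp(-net));
    }

    // Output O = f(net).
    static double output(double[] x, double[] w, double bias) {
        return sigmoid(net(x, w, bias));
    }

    public static void main(String[] args) {
        double[] inputs = {0.5, 0.2};    // e.g. normalized temperature, humidity
        double[] weights = {0.4, -0.3};  // illustrative connection weights
        System.out.println("neuron output = " + output(inputs, weights, 0.1));
    }
}
```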
Experiment 2:
Develop DFD model (level-0, level-1 DFD and Data dictionary) of the
project.
A data flow diagram (DFD) is a graphical representation of the flow of data through an information
system. A data flow diagram can also be used for the visualization of data processing (structured design).
It is common practice for a designer to draw a context-level DFD first which shows the interaction
between the system and outside entities. This context-level DFD is then exploded to show more detail of
the system being modeled.
The attribute values are numeric. The preprocessed attributes used are listed in Table 1.
The Bayesian Classifier is capable of calculating the most probable output depending on the input. The
flow of the model is shown in Fig 2. It is possible to add new raw data at runtime and have a better
probabilistic classifier. A Naive Bayes classifier assumes that the presence (or absence) of a particular
feature of a class is unrelated to the presence (or absence) of any other feature, given the class variable.
The system consists of two functions, Train Classifier and Classify. The Train Classifier function
trains the data set by calculating the mean and variance of each variable, as shown in Table 2.
The classifier is created from the training data set using a Gaussian distribution. The Classify
function finds the probabilities using the normal distribution. In order to get the probability
P(Temp|Yes) we use the formula

P(x | Yes) = (1 / sqrt(2 * pi * sigma^2)) * exp(-(x - mu)^2 / (2 * sigma^2))

Here x is the value of the temperature from the test data, and mu and sigma are the mean and
standard deviation of temperature calculated from the training dataset. A similar process is
repeated for all the other attributes to get the individual probabilities.
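A minimal sketch of this likelihood computation in Java; the mean and standard deviation used below are illustrative stand-ins, not the values of Table 2.

```java
// Gaussian likelihood used by the Naive Bayes classifier: given the mean and
// standard deviation of an attribute (e.g. temperature) estimated from the
// training set, compute P(x | class) with the normal density.
public class GaussianLikelihood {
    // Normal density: (1 / sqrt(2*pi*sigma^2)) * exp(-(x - mu)^2 / (2*sigma^2)).
    static double density(double x, double mu, double sigma) {
        double z = (x - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
    }

    public static void main(String[] args) {
        // P(Temp = 24 | Rainfall = Yes) with assumed mean 22 and std dev 3.
        double p = density(24.0, 22.0, 3.0);
        System.out.println("P(Temp|Yes) = " + p);
    }
}
```

Multiplying such per-attribute densities with the class prior gives the naive Bayes score for each class.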
1) Computational illustration of rainfall prediction: for the classification Rainfall = No, the probability
is given by:
P(Rainfall=No) = P(Rain=No) * P(Temp|No) * P(Dewpoint
The flow of the model is as shown in Fig 3. Here K is the number of instances used to cast the vote
when labelling a previously unobserved instance. K-NN is a type of instance-based learning, or lazy
learning, where the function is only approximated locally and all computation is deferred until
classification. Both for classification and regression, a useful technique is to assign weights to the
contributions of the neighbors, so that nearer neighbors contribute more to the average than more
distant ones. Given an instance I = {i0, ..., in, class}, we calculate the Euclidean distance between I
and each known instance in the dataset as follows:
Here Z is a sequence of values from the training dataset of some instance i in attribute k for which a
classification is given, and I is the unclassified test data instance. The distances were calculated on
normalized data. We normalize each value according to:
Z is from the dataset. The instance that we need to classify is also normalized. Once the distances
are calculated, we can proceed to vote on which class the instance I should belong to. To do this, we
select the K smallest distances and look at their corresponding classes.
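The distance-and-vote procedure above can be sketched in Java as follows; the training instances, labels, and the "Yes"/"No" rainfall classes are illustrative assumptions.

```java
import java.util.Arrays;

// K-NN sketch: compute the Euclidean distance from the query instance to every
// training instance, then let the K nearest neighbors vote on the class label.
public class Knn {
    static double euclidean(double[] a, double[] b) {
        double sum = 0;
        for (int k = 0; k < a.length; k++) {
            double d = a[k] - b[k];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Majority vote among the labels of the K nearest training instances.
    static String classify(double[][] train, String[] labels, double[] query, int k) {
        Integer[] idx = new Integer[train.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort training indices by distance to the query (ascending).
        Arrays.sort(idx, (i, j) -> Double.compare(
                euclidean(train[i], query), euclidean(train[j], query)));
        int yes = 0;
        for (int i = 0; i < k; i++) if (labels[idx[i]].equals("Yes")) yes++;
        return yes * 2 > k ? "Yes" : "No";
    }

    public static void main(String[] args) {
        double[][] train = {{0.1, 0.2}, {0.9, 0.8}, {0.15, 0.25}}; // normalized attributes
        String[] labels = {"No", "Yes", "No"};
        System.out.println(classify(train, labels, new double[]{0.2, 0.2}, 3));
    }
}
```

The weighted variant mentioned above would replace the simple count with votes weighted by 1/distance.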
2.1 K-Means
The K-Means clustering algorithm is a partition-based cluster analysis method. The algorithm first
selects k objects as initial cluster centers, then calculates the distance between each object and
each cluster center and assigns each object to the nearest cluster, updates the averages (centers)
of all clusters, and repeats this process until the criterion function converges. The squared-error
criterion for clustering is

E = sum_{i=1..k} sum_{j=1..n_i} || x_ij - c_i ||^2

where x_ij is sample j of cluster i, c_i is the center of cluster i, and n_i is the number of samples of
cluster i.
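A minimal Java sketch of the assignment and update steps described above, run for a fixed number of iterations; the data points and initial centers are illustrative, not the project's weather observations.

```java
// K-means sketch: assign each point to the nearest center, then recompute each
// center as the mean of its cluster; repeating drives down the squared error E.
public class KMeans {
    // Index of the center closest to the given point (squared distance suffices).
    static int nearest(double[] point, double[][] centers) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < centers.length; i++) {
            double d = 0;
            for (int k = 0; k < point.length; k++) {
                double diff = point[k] - centers[i][k];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    static int[] cluster(double[][] data, double[][] centers, int iterations) {
        int[] assign = new int[data.length];
        for (int it = 0; it < iterations; it++) {
            // Assignment step.
            for (int p = 0; p < data.length; p++) assign[p] = nearest(data[p], centers);
            // Update step: move each center to the mean of its assigned points.
            for (int i = 0; i < centers.length; i++) {
                double[] sum = new double[centers[i].length];
                int count = 0;
                for (int p = 0; p < data.length; p++) {
                    if (assign[p] != i) continue;
                    count++;
                    for (int k = 0; k < sum.length; k++) sum[k] += data[p][k];
                }
                if (count > 0)
                    for (int k = 0; k < sum.length; k++) centers[i][k] = sum[k] / count;
            }
        }
        return assign;
    }

    public static void main(String[] args) {
        double[][] data = {{1, 1}, {1.2, 0.8}, {8, 8}, {8.2, 7.9}};
        double[][] centers = {{0, 0}, {10, 10}}; // k = 2 initial centers
        System.out.println(java.util.Arrays.toString(cluster(data, centers, 5)));
    }
}
```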
2.2 HMM
An HMM is a doubly embedded stochastic process with two hierarchy levels. It can be used to
model much more complex stochastic processes than a traditional Markov model. In
a specific state, an observation can be generated according to an associated probability
distribution. Only the observation, and not the state, is visible to an external observer.
An HMM can be characterized by the following:
N is the number of states in the model. We denote the set of states S = {S1, S2, ..., SN},
where Si, i = 1, 2, ..., N, is an individual state. The state at time instant t is denoted by qt.
M is the number of distinct observation symbols per state. We denote the set of symbols
V = {V1, V2, ..., VM}.
The observation sequence is O = O1, O2, O3, ..., OR, where each observation Ot is one of the
symbols from V, and R is the number of observations in the sequence.
It is evident that a complete specification of an HMM requires the two model parameters N and M
and three probability distributions A, B, and pi. We use the notation lambda = (A, B, pi) to specify
the complete set of parameters of the model, where A and B implicitly contain N and M.
The probability of generation of the observation sequence O by the HMM specified by lambda can
be written as:

P(O | lambda) = sum over all state sequences Q of P(O | Q, lambda) * P(Q | lambda)
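In practice this probability is evaluated with the standard forward algorithm rather than by enumerating all state sequences; a minimal Java sketch for a two-state model follows, with illustrative matrices rather than the project's trained parameters.

```java
// Forward algorithm sketch: evaluates P(O | lambda) for an HMM with transition
// matrix A, emission matrix B, and initial distribution pi (all illustrative).
public class HmmForward {
    // alpha_t(i) = P(O_1..O_t, q_t = S_i | lambda), computed recursively over t.
    static double observationProbability(double[][] A, double[][] B,
                                         double[] pi, int[] obs) {
        int n = pi.length;
        double[] alpha = new double[n];
        for (int i = 0; i < n; i++) alpha[i] = pi[i] * B[i][obs[0]]; // initialization
        for (int t = 1; t < obs.length; t++) {
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int i = 0; i < n; i++) sum += alpha[i] * A[i][j]; // induction
                next[j] = sum * B[j][obs[t]];
            }
            alpha = next;
        }
        double p = 0;
        for (int i = 0; i < n; i++) p += alpha[i]; // termination: P(O | lambda)
        return p;
    }

    public static void main(String[] args) {
        double[][] A = {{0.7, 0.3}, {0.4, 0.6}}; // state transition probabilities
        double[][] B = {{0.9, 0.1}, {0.2, 0.8}}; // symbol emission probabilities
        double[] pi = {0.6, 0.4};                // initial state distribution
        int[] obs = {0, 1, 0};                   // an observation sequence over V
        System.out.println("P(O|lambda) = " + observationProbability(A, B, pi, obs));
    }
}
```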
The main aim of UML is to define a standard way to visualize the way a system has been
designed. UML uses diagrams to portray the behavior and structure of a system, and helps software
engineers, businessmen and system architects with modelling, design and analysis. The Object
Management Group (OMG) adopted UML as a standard in 1997, and it has been managed by OMG
ever since. The International Organization for Standardization (ISO) published UML as an approved
standard in 2005. UML has elements and forms associations between them to form diagrams.
1. Class – A class defines the blueprint, i.e. the structure and functions of an object.
5. Encapsulation – Binding data together and protecting it from the outer world is
referred to as encapsulation.
6. Polymorphism – Mechanism by which functions or entities are able to exist in
different forms.
Experiment 5:
a) Sequence Diagram-
Sequence diagrams are used to demonstrate the behavior of objects in a use case by describing
the objects and the messages they pass. The diagrams are read from left to right and descending.
Here, the user first interacts with NewWeather, which sends a message to login and shows the
NN GUI. After that, the weights are initialized in frmNeural. frmNeural sends a message to
ParseTree, which sends a message to TreeNode, and finally ParseTree sends a message to
DataPoint. At last, ParseTree generates the output.
b) Collaboration Diagram-
The second interaction diagram is the collaboration diagram. It shows the object organization as
shown below. A collaboration diagram shows the relationship between objects and the order of
messages passed between them. The objects are listed as icons, and arrows indicate the messages
being passed between objects. The numbers next to the messages are called sequence numbers.
As the name suggests, they show the sequence of the messages.
Manual vs. automation testing:
● Huge investment in human resources: test cases need to be executed manually, so more testers
are required in manual testing. / Less investment in human resources: as test cases are executed
by using an automation tool, fewer testers are required.
● Less reliable: manual testing is less reliable, as tests may not be performed with precision each
time because of human errors. / More reliable: automation tests perform precisely the same
operation each time they are run.
JUnit is a unit testing framework for the Java programming language. It is important in test-driven
development, and is one of a family of unit testing frameworks collectively known as xUnit.
JUnit promotes the idea of "first testing then coding", which emphasizes setting up the test data
for a piece of code, which can be tested first and then implemented. This approach is like
"test a little, code a little, test a little, code a little...", which increases programmer productivity
and the stability of program code, reducing programmer stress and the time spent on debugging.
Features
● JUnit is an open source framework which is used for writing and running tests.
● JUnit tests allow you to write code faster while increasing quality.
● JUnit is elegantly simple.
● JUnit tests can be run automatically; they check their own results and provide immediate
feedback. There's no need to manually comb through a report of test results.
● JUnit tests can be organized into test suites containing test cases and even other test suites.
● JUnit shows test progress in a bar that is green if testing is going fine and turns red when a test
fails.
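To illustrate the "first testing then coding" idea, the sketch below hand-rolls the test pattern in plain Java so it runs without the JUnit jar; with JUnit itself you would annotate the method with @Test and use Assert.assertEquals. The Calculator class is a hypothetical example, not part of this project.

```java
// A JUnit-style unit test written in plain Java for illustration; the test is
// written first and defines the expected behavior before the code is finished.
public class CalculatorTest {
    // Hypothetical class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // With JUnit this would be annotated @Test and use Assert.assertEquals.
    static boolean testAdd() {
        Calculator c = new Calculator();
        return c.add(2, 3) == 5; // green bar if true, red bar if false
    }

    public static void main(String[] args) {
        System.out.println(testAdd() ? "GREEN: testAdd passed" : "RED: testAdd failed");
    }
}
```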
Experiment 8:
Both options modify the project's MANIFEST.MF in order to make it a valid OSGi bundle.
The facet may be enabled during the project creation or after that from the Properties page of
the project. The wizard is available from project's context menu Configure > Convert to OSGi
Bundle Projects...
Note that you may need to adjust your target platform accordingly.
● WAR Products feature which provides WAR deployment for Equinox based applications
Obtaining Sources
You can find the sources in the Git repository.
To synchronize them locally, you may use the EGit step-by-step procedure.
The EGit User Guide provides detailed instructions on how to work with EGit.
Updating/Installing EGit
● Start your Eclipse IDE and navigate to Help -> Install New Software -> Add...
● Enter the software update site [1]
● Select Eclipse EGit (Incubation) and Eclipse JGit (Incubation) and choose Next > to finish the
installation