Lab Manual: Jawaharlal Nehru Engineering College Aurangabad


MGM’s

Jawaharlal Nehru Engineering College Aurangabad


Affiliated to Dr. B. A. Technological University, Lonere, Maharashtra
ISO 9001:2015, ISO 14001:2015 Certified, AICTE Approved

Department of Computer Science & Engineering

LAB MANUAL

Programme (UG/PG) : UG

Year : Final Year

Semester : VII

Course Code :

Course Title : Data Warehousing and Data Mining.

Prepared By
A. H. Telgaonkar
Assistant Professor
Department of Computer Science & Engineering
FOREWORD

It is my great pleasure to present this laboratory manual for final-year engineering
students for the subject of Data Warehousing and Data Mining.

As students, many of you may have questions in your mind regarding the subject,
and what this manual attempts to do is answer them.

As you may be aware, MGM has already been awarded ISO 9001:2015 and ISO
14001:2015 certification, and it is our endeavour to technically equip our students
by taking advantage of the procedural aspects of ISO certification.

Faculty members are also advised that covering these aspects at the initial stage
will greatly relieve them in future, as much of the load will be carried by the
enthusiastic energy of the students once they are conceptually clear.

Dr. H. H. Shinde
Principal
LABORATORY MANUAL CONTENTS

This manual is intended for FINAL YEAR COMPUTER SCIENCE students for
the subject of Data Warehousing and Data Mining. It contains practical/lab
sessions related to data warehousing and data mining, covering various aspects of
the subject to enhance understanding.

Students are advised to go through this manual thoroughly, and not only the topics
mentioned in the syllabus, as practical aspects are the key to understanding and
conceptual visualization of the theoretical aspects covered in the books.

Good Luck for your Enjoyable Laboratory Sessions

Dr. V. B. Musande                    Ms. A. H. Telgaonkar

HOD, CSE                             CSE Dept


DOs and DON’Ts in Laboratory:

1. Make entry in the Log Book as soon as you enter the Laboratory.

2. All the students should sit according to their roll numbers, from left to right.

3. All the students are supposed to enter the terminal number in the log book.

4. Do not change the terminal on which you are working.

5. All the students are expected to come prepared with at least the algorithm of the
program/concept to be implemented.

6. Strictly observe the instructions given by the teacher/Lab Instructor.

Instructions for Laboratory Teachers:

1. Submission of whatever lab work has been completed should be done during the next
lab session. Printouts related to the submission should be arranged on the day of the
practical assignment itself.

2. Students should be taught to take printouts under the observation of the lab teacher.

3. Prompt submission should be encouraged by way of marking and evaluation patterns
that will benefit the sincere students.
SUBJECT INDEX

SET-I

1. Implementation of OLAP operations
2. Implementation of varying arrays
3. Implementation of nested tables
4. Demonstration of any ETL tool
5. Write a program of the Apriori algorithm using any programming language
6. Write a program of naive Bayesian classification using C
7. Write a program of cluster analysis using the simple k-means algorithm, using any programming language
8. A case study of Business Intelligence in Government sector / Social Networking / Business

SET-II

1. Create data-set in ARFF file format. Demonstration of preprocessing on a WEKA data-set
2. Demonstration of Association rule process on data-set contact lenses.arff / supermarket using the Apriori algorithm
3. Demonstration of classification rule process on a WEKA data-set using the J48 algorithm
4. Demonstration of classification rule process on a WEKA data-set using the naive Bayes algorithm
5. Demonstration of clustering rule process on data-set iris.arff using simple k-means

Assignment: 1
Implementation of OLAP operations

S/w Requirement: ORACLE, DB2.

Objective:

• To learn fundamentals of data warehousing
• To learn concepts of dimensional modeling
• To learn star, snowflake and galaxy schemas

Reference:

• SQL, PL/SQL by Ivan Bayross
• Data Mining: Concepts and Techniques by Han & Kamber
• Data Warehousing Fundamentals by Paulraj Ponniah
• Data Warehousing & Mining by Reema Thareja

Pre‐requisite:

• Fundamental Knowledge of Database Management
• Fundamental Knowledge of SQL

Description:

OLAP is an acronym for Online Analytical Processing. An OLAP system manages large
amounts of historical data, provides facilities for summarization and aggregation,
and stores and manages information at different levels of granularity.

OLAP Operations

Since OLAP servers are based on a multidimensional view of data, we will discuss OLAP operations
on multidimensional data.

Here is the list of OLAP operations:

 Roll-up
 Drill-down
 Slice and dice
 Pivot (rotate)

Roll-up
Roll-up performs aggregation on a data cube in any of the following ways:

 By climbing up a concept hierarchy for a dimension
 By dimension reduction

The following diagram illustrates how roll-up works.

 Roll-up is performed by climbing up a concept hierarchy for the dimension location.


 Initially the concept hierarchy was "street < city < province < country".
 On rolling up, the data is aggregated by ascending the location hierarchy from the level of city to
the level of country.
 The data is grouped into countries rather than cities.
 When roll-up is performed, one or more dimensions from the data cube are removed.
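The same roll-up can be sketched in SQL. A minimal illustration, assuming a hypothetical fact table sales(street, city, province, country, amount); the table and column names are ours, not part of the exercise:

-- City-level aggregation (before roll-up)
SELECT country, province, city, SUM(amount) AS total_sales
FROM sales
GROUP BY country, province, city;

-- Roll-up: climb the location hierarchy from city to country
SELECT country, SUM(amount) AS total_sales
FROM sales
GROUP BY country;

Oracle can also produce all levels of the hierarchy in one query with GROUP BY ROLLUP(country, province, city).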

Drill-down
Drill-down is the reverse operation of roll-up. It is performed by either of the following ways:

 By stepping down a concept hierarchy for a dimension
 By introducing a new dimension.

The following diagram illustrates how drill-down works:


 Drill-down is performed by stepping down a concept hierarchy for the
dimension time.
 Initially the concept hierarchy was "day < month < quarter < year."
 On drilling down, the time dimension is descended from the level of quarter to
the level of month.
 When drill-down is performed, one or more dimensions from the data cube are
added.
 It navigates the data from less detailed data to more detailed data.
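A corresponding SQL sketch, again over a hypothetical sales table, this time with assumed time columns year, quarter, and month:

-- Quarter-level summary (before drill-down)
SELECT year, quarter, SUM(amount) AS total_sales
FROM sales
GROUP BY year, quarter;

-- Drill-down: descend the time hierarchy from quarter to month
SELECT year, quarter, month, SUM(amount) AS total_sales
FROM sales
GROUP BY year, quarter, month;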

Slice
The slice operation selects one particular dimension from a given cube and provides a new
sub-cube. Consider the following diagram that shows how slice works.
 Here Slice is performed for the dimension "time" using the criterion time = "Q1".
 It forms a new sub-cube by making a selection on a single dimension.

Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube.
Consider the following diagram that shows the dice operation.
The dice operation on the cube, based on the following selection criteria, involves three
dimensions:

 (location = "Toronto" or "Vancouver")


 (time = "Q1" or "Q2")
 (item =" Mobile" or "Modem")

Pivot
The pivot operation is also known as rotation. It rotates the data axes in view in order to
provide an alternative presentation of data. Consider the following diagram that shows the
pivot operation.
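In plain SQL, a pivot can be sketched with conditional aggregation over the same hypothetical sales table: the quarters move from rows to columns.

-- Pivot: rotate the quarter axis from rows into columns
SELECT item,
       SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1_sales,
       SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2_sales
FROM sales
GROUP BY item;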

Post lab assignment:

1. Star schema vs. snowflake schema
2. Dimensional table vs. relational table
3. Advantages of snowflake schema
Assignment: 2

Implementation of Varying Arrays.

S/w Requirement: ORACLE, DB2.

Objective:

• To learn fundamentals of data warehousing
• To learn concepts of dimensional modeling
• To learn star, snowflake and galaxy schemas

Reference:

• SQL, PL/SQL by Ivan Bayross
• Data Mining: Concepts and Techniques by Han & Kamber
• Data Warehousing Fundamentals by Paulraj Ponniah
• Data Warehousing & Mining by Reema Thareja

Pre‐requisite:

• Fundamental Knowledge of Database Management
• Fundamental Knowledge of SQL

Theory:
The PL/SQL programming language provides a data structure called the VARRAY, which can store
a fixed-size sequential collection of elements of the same type. A varray is used to store an
ordered collection of data, but it is often more useful to think of an array as a collection of
variables of the same type.

All varrays consist of contiguous memory locations. The lowest address corresponds to the first
element and the highest address to the last element.

Creating a Varray Type


A varray type is created with the CREATE TYPE statement. You must specify the maximum
size and the type of elements stored in the varray.

The basic syntax for creating a VARRAY type at the schema level is:

CREATE OR REPLACE TYPE varray_type_name IS VARRAY(n) OF <element_type>;


Where,

 varray_type_name is a valid attribute name,
 n is the maximum number of elements in the varray,
 element_type is the data type of the elements of the array.
Maximum size of a varray can be changed using the ALTER TYPE statement.

For example,

CREATE OR REPLACE TYPE namearray AS VARRAY(3) OF VARCHAR2(10);
/

Type created.
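If the maximum size later proves too small, it can be raised in place with ALTER TYPE. A small sketch using the type just created:

ALTER TYPE namearray MODIFY LIMIT 10 CASCADE;

Type altered.

Here CASCADE propagates the change to any dependent tables and types.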

The basic syntax for creating a VARRAY type within a PL/SQL block is:

TYPE varray_type_name IS VARRAY(n) OF <element_type>;

For example:

TYPE namearray IS VARRAY(5) OF VARCHAR2(10);
TYPE grades IS VARRAY(5) OF INTEGER;

Example 1
The following program illustrates using varrays:

DECLARE
   TYPE namesarray IS VARRAY(5) OF VARCHAR2(10);
   TYPE grades IS VARRAY(5) OF INTEGER;
   names namesarray;
   marks grades;
   total integer;
BEGIN
   names := namesarray('Kavita', 'Pritam', 'Ayan', 'Rishav', 'Aziz');
   marks := grades(98, 97, 78, 87, 92);
   total := names.count;
   dbms_output.put_line('Total '|| total || ' Students');
   FOR i in 1 .. total LOOP
      dbms_output.put_line('Student: ' || names(i) || ' Marks: ' || marks(i));
   END LOOP;
END;
/

When the above code is executed at the SQL prompt, it produces the following result:

Total 5 Students
Student: Kavita Marks: 98
Student: Pritam Marks: 97
Student: Ayan Marks: 78
Student: Rishav Marks: 87
Student: Aziz Marks: 92
PL/SQL procedure successfully completed.

Note:

 In the Oracle environment, the starting index for varrays is always 1.

 You can initialize the varray elements using the constructor method of the varray type, which
has the same name as the varray.

 Varrays are one-dimensional arrays.

 A varray is automatically NULL when it is declared and must be initialized before its elements
can be referenced.

Post lab assignment:

1. Advantages of varrays
Assignment: 3

Implementation of Nested Tables.


S/w Requirement: ORACLE, DB2.

Objective:

• To learn fundamentals of data warehousing
• To learn concepts of dimensional modeling
• To learn star, snowflake and galaxy schemas

Reference:

• SQL, PL/SQL by Ivan Bayross
• Data Mining: Concepts and Techniques by Han & Kamber
• Data Warehousing Fundamentals by Paulraj Ponniah
• Data Warehousing & Mining by Reema Thareja

Pre‐requisite:

• Fundamental Knowledge of Database Management
• Fundamental Knowledge of SQL

A collection is an ordered group of elements having the same data type. Each element is
identified by a unique subscript that represents its position in the collection.

PL/SQL provides three collection types:

 Index-by tables or associative arrays
 Nested tables
 Variable-size arrays or varrays
Oracle documentation provides the following characteristics for each type of collection:

Associative array (or index-by table): unbounded number of elements; subscript type string or integer; dense or sparse; can be created only in a PL/SQL block; cannot be an object type attribute.

Nested table: unbounded number of elements; integer subscript; starts dense, can become sparse; can be created either in a PL/SQL block or at schema level; can be an object type attribute.

Variable-size array (varray): bounded number of elements; integer subscript; always dense; can be created either in a PL/SQL block or at schema level; can be an object type attribute.

We have already discussed varrays in the previous assignment. In this assignment, we will
discuss PL/SQL tables.

Both types of PL/SQL tables, i.e., index-by tables and nested tables, have the same structure, and
their rows are accessed using subscript notation. However, these two types of tables differ in
one aspect: nested tables can be stored in a database column while index-by tables cannot.

Index-By Table
An index-by table (also called an associative array) is a set of key-value pairs. Each key is
unique and is used to locate the corresponding value. The key can be either an integer or a
string.
An index-by table is created using the following syntax. Here, we are creating an index-by table
named table_name whose keys will be of subscript_type and associated values will be
of element_type:

TYPE type_name IS TABLE OF element_type [NOT NULL] INDEX BY subscript_type;

table_name type_name;

Example:
The following example shows how to create a table that stores salaries (integer values) indexed
by names, and later prints the names along with their salaries.

DECLARE
   TYPE salary IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
   salary_list salary;
   name VARCHAR2(20);
BEGIN
   -- adding elements to the table
   salary_list('Rajnish') := 62000;
   salary_list('Minakshi') := 75000;
   salary_list('Martin') := 100000;
   salary_list('James') := 78000;

   -- printing the table
   name := salary_list.FIRST;
   WHILE name IS NOT null LOOP
      dbms_output.put_line
      ('Salary of ' || name || ' is ' || TO_CHAR(salary_list(name)));
      name := salary_list.NEXT(name);
   END LOOP;
END;
/

When the above code is executed at the SQL prompt, it produces the following result:
Salary of Rajnish is 62000
Salary of Minakshi is 75000
Salary of Martin is 100000
Salary of James is 78000

PL/SQL procedure successfully completed.

Example:
Elements of an index-by table could also be a %ROWTYPE of any database table or %TYPE of
any database table field. The following example illustrates the concept. We will use the
CUSTOMERS table stored in our database as:

Select * from customers;

+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 32 | Ahmedabad | 2000.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
+----+----------+-----+-----------+----------+
DECLARE
   CURSOR c_customers is
      select name from customers;

   TYPE c_list IS TABLE of customers.name%type INDEX BY binary_integer;
   name_list c_list;
   counter integer := 0;
BEGIN
   FOR n IN c_customers LOOP
      counter := counter + 1;
      name_list(counter) := n.name;
      dbms_output.put_line('Customer('||counter||'):'||name_list(counter));
   END LOOP;
END;
/

When the above code is executed at the SQL prompt, it produces the following result:

Customer(1): Ramesh
Customer(2): Khilan
Customer(3): kaushik
Customer(4): Chaitali
Customer(5): Hardik
Customer(6): Komal
PL/SQL procedure successfully completed.

Nested Tables
A nested table is like a one-dimensional array with an arbitrary number of elements. However,
a nested table differs from an array in the following aspects:
 An array has a declared number of elements, but a nested table does not. The size of a nested
table can increase dynamically.

 An array is always dense, i.e., it always has consecutive subscripts. A nested table is dense
initially, but it can become sparse when elements are deleted from it.

A nested table is created using the following syntax:

TYPE type_name IS TABLE OF element_type [NOT NULL];

table_name type_name;

This declaration is similar to the declaration of an index-by table, but there is no INDEX BY clause.
A nested table can be stored in a database column and so it could be used for simplifying SQL
operations where you join a single-column table with a larger table. An associative array cannot
be stored in the database.
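As a brief sketch of storing a nested table in a column (the type, table, and column names here are ours, not part of the exercise), a schema-level nested table type is used as the column type, and a storage clause names the out-of-line table that holds its rows:

CREATE TYPE courselist AS TABLE OF VARCHAR2(64);
/
CREATE TABLE department (
   name    VARCHAR2(20),
   courses courselist
)
NESTED TABLE courses STORE AS courses_tab;

-- rows of the nested column are queried with the TABLE() expression
SELECT d.name, c.column_value AS course
FROM department d, TABLE(d.courses) c;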

Example:
The following example illustrates the use of a nested table:

DECLARE
   TYPE names_table IS TABLE OF VARCHAR2(10);
   TYPE grades IS TABLE OF INTEGER;
   names names_table;
   marks grades;
   total integer;
BEGIN
   names := names_table('Kavita', 'Pritam', 'Ayan', 'Rishav', 'Aziz');
   marks := grades(98, 97, 78, 87, 92);
   total := names.count;
   dbms_output.put_line('Total '|| total || ' Students');
   FOR i IN 1 .. total LOOP
      dbms_output.put_line('Student:'||names(i)||', Marks:' || marks(i));
   END LOOP;
END;
/

When the above code is executed at the SQL prompt, it produces the following result:

Total 5 Students
Student:Kavita, Marks:98
Student:Pritam, Marks:97
Student:Ayan, Marks:78
Student:Rishav, Marks:87
Student:Aziz, Marks:92
PL/SQL procedure successfully completed.

Example:
Elements of a nested table could also be a %ROWTYPE of any database table or %TYPE of any
database table field. The following example illustrates the concept. We will use the CUSTOMERS table
stored in our database as:

Select * from customers;

+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 32 | Ahmedabad | 2000.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
+----+----------+-----+-----------+----------+
DECLARE
   CURSOR c_customers is
      SELECT name FROM customers;

   TYPE c_list IS TABLE of customers.name%type;
   name_list c_list := c_list();
   counter integer := 0;
BEGIN
   FOR n IN c_customers LOOP
      counter := counter + 1;
      name_list.extend;
      name_list(counter) := n.name;
      dbms_output.put_line('Customer('||counter||'):'||name_list(counter));
   END LOOP;
END;
/

When the above code is executed at the SQL prompt, it produces the following result:

Customer(1): Ramesh
Customer(2): Khilan
Customer(3): kaushik
Customer(4): Chaitali
Customer(5): Hardik
Customer(6): Komal

PL/SQL procedure successfully completed.


Assignment: 4

Demonstration of any ETL tool.

Objective:

Reference:

Pre‐requisite:
Assignment: 5
Implement the Apriori algorithm for association rule mining
Objective:
• To learn association rule mining using the Apriori algorithm

Reference:
• Data Mining Introductory & Advanced Topic by Margaret H. Dunham
• Data Mining Concept and Technique By Han & Kamber

Pre‐requisite:
• Fundamental Knowledge of Database Management

Theory:
The goal of association rule mining is to find association rules that satisfy the predefined
minimum support and confidence in a given database. The problem is usually
decomposed into two subproblems:

 Find those item sets whose occurrences exceed a predefined threshold in the database;
those item sets are called frequent or large item sets.
 Generate association rules from those large item sets with the constraints of minimal
confidence.

Suppose one of the large itemsets is Lk = {I1, I2, ..., Ik}; association rules from this itemset
are generated in the following way: the first rule is {I1, I2, ..., Ik-1} => {Ik}. By checking
its confidence, this rule can be determined to be interesting or not. Then, other rules are
generated by deleting the last item in the antecedent and inserting it into the consequent,
and the confidences of the new rules are checked to determine their interestingness.
This process iterates until the antecedent becomes empty.

Since the second subproblem is quite straightforward, most of the research focuses on
the first subproblem. The Apriori algorithm finds the frequent itemsets L in the database D:

 Find the frequent itemset Lk-1.
 Join step: Ck is generated by joining Lk-1 with itself.
 Prune step: any (k-1)-itemset that is not frequent cannot be a subset of a frequent
k-itemset, and hence should be removed,

where
 Ck is the candidate itemset of size k
 Lk is the frequent itemset of size k

Input:
A large supermarket tracks sales data by SKU (Stock Keeping Unit, i.e., by item), and thus is
able to know what items are typically purchased together. Apriori is a moderately
efficient way to build a list of frequently purchased item pairs from this data.

Let the database of transactions consist of the sets {1,2,3,4}, {2,3,4}, {2,3}, {1,2,4}, {1,2,3,4},
and {2,4}.
Output
Each number corresponds to a product such as "butter" or "water". The first step of
Apriori is to count the frequencies, called the supports, of each member item separately:

Item Support
1 3
2 6
3 4
4 5

We can define a minimum support level to qualify as "frequent", which depends on the
context. For this case, let min support = 3. Therefore, all four items are frequent. The next
step is to generate a list of all 2-pairs of the frequent items. Had any of the above items not
been frequent, it would not have been included as a possible member of a 2-item pair.
In this way, Apriori prunes the tree of all possible sets.

Item Support
{1,2} 3
{1,3} 2
{1,4} 3
{2,3} 4
{2,4} 5
{3,4} 3

This counts the occurrences of each of those pairs in the database. Since minsup = 3, we
don't need to generate 3-sets involving {1,3}: because {1,3} is not frequent, no superset
of it can possibly be frequent. Keep going:

Item Support
{1,2,4} 3
{2,3,4} 3
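The counting steps above map naturally onto SQL. A minimal sketch, assuming a hypothetical table txn(tid, item) that holds the six transactions one item per row, with minsup = 3:

-- Frequent 1-itemsets (support >= 3)
SELECT item, COUNT(*) AS support
FROM txn
GROUP BY item
HAVING COUNT(*) >= 3;

-- Frequent 2-itemsets: join the table with itself on the transaction id,
-- ordering the items so that each pair is counted only once
SELECT a.item AS item1, b.item AS item2, COUNT(*) AS support
FROM txn a JOIN txn b ON a.tid = b.tid AND a.item < b.item
GROUP BY a.item, b.item
HAVING COUNT(*) >= 3;

Extending the self-join with a third copy of txn yields the 3-itemsets {1,2,4} and {2,3,4} shown above.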

Post lab assignment:

1. Give an example of Apriori with transactions and explain the apriori-gen algorithm.
Assignment: 6
Bayesian Classification
Objective:

• To implement classification using Bayes' theorem.

Reference:
• Data Mining Introductory & Advanced Topic by Margaret H. Dunham
• Data Mining Concept and Technique By Han & Kamber

Pre‐requisite:
• Fundamental Knowledge of probability and Bayes' theorem

Theory:

Simple (naive) Bayesian classification assumes that the effect of an attribute value on a given
class membership is independent of the values of the other attributes.

Bayes' theorem is as follows:

Let X be an unknown sample, and let H be the hypothesis that X belongs to a particular class C.
We need to determine P(H|X), the probability that the hypothesis H holds given the observed
values of X:

P(H|X) = P(X|H) . P(H) / P(X)

In this program, we initially take the number of tuples in the training data set in the variable L.
The string arrays name, gender, height, and output store the details and the output respectively.
The tuple details are then read from the user using 'for' loops.

Bayesian classification produces an expected classification. Counter variables are kept for the
various attribute values, i.e., (male/female) for gender and (short/medium/tall) for height.

The tuples are scanned and the respective counter is incremented accordingly using an
if-else-if structure.

The variables pshort, pmed, and plong are then used to convert the counter variables to the
corresponding probabilities.

Algorithm:
1. START
2. Store the training data set
3. Specify ranges for classifying the data
4. Calculate the prior probability of being tall, medium, or short
5. Also calculate the probabilities of tall, short, and medium according to gender and the
classification ranges
6. Calculate the likelihood of short, medium, and tall
7. Calculate P(t) by summing up the probable likelihoods
8. Calculate the actual probabilities
Input:

Training data set

Name       Gender  Height  Output
Christina  F       1.6m    Short
Jim        M       1.9m    Tall
Maggie     F       1.9m    Medium
Martha     F       1.88m   Medium
Stephony   F       1.7m    Medium
Bob        M       1.85m   Short
Dave       M       1.7m    Short
Steven     M       2.1m    Tall
Amey       F       1.8m    Medium

Output
The tuple belongs to the class having the highest probability; thus the new tuple is classified.
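As a worked sketch of the computation (the discretization ranges here are our assumption, since the manual leaves them to the student: short = (0, 1.6], medium = (1.6, 1.9], tall = (1.9, ∞)), consider classifying a new tuple X = (gender = M, height = 1.95m):

P(Short) = 3/9, P(Medium) = 4/9, P(Tall) = 2/9
P(M | Short) = 2/3, P(height > 1.9 | Short) = 0/3
P(M | Medium) = 0/4, P(height > 1.9 | Medium) = 0/4
P(M | Tall) = 2/2, P(height > 1.9 | Tall) = 1/2

P(X | Tall) . P(Tall) = (2/2)(1/2)(2/9) = 1/9, while the corresponding products for Short and Medium are 0, so X is classified as Tall.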

Post lab assignment:


Assignment: 7
Implement the k-means algorithm for clustering
Objective:
• To learn the k-means algorithm for clustering

Reference:
• Data Mining Introductory & Advanced Topic by Margaret H. Dunham
• Data Mining Concept and Technique By Han & Kamber

Pre‐requisite:
• Fundamental Knowledge of Database Management

Theory:

In statistics and machine learning, k-means clustering is a method of cluster analysis which aims
to partition n observations into k clusters in which each observation belongs to the cluster with
the nearest mean. How does the k-means clustering algorithm work?
Here is the step-by-step k-means clustering algorithm:

Step 1. Begin with a decision on the value of k = number of clusters.

Step 2. Put any initial partition that classifies the data into k clusters. You may assign the
training samples randomly, or systematically as follows:

1. Take the first k training samples as single-element clusters.
2. Assign each of the remaining (N-k) training samples to the cluster with the nearest
centroid. After each assignment, recompute the centroid of the gaining cluster.

Step 3. Take each sample in sequence and compute its distance from the centroid of each
of the clusters. If a sample is not currently in the cluster with the closest centroid, switch
this sample to that cluster and update the centroid of the cluster gaining the new sample
and the cluster losing the sample.

Step 4. Repeat step 3 until convergence is achieved, that is, until a pass through the training
samples causes no new assignments.
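The two quantities computed in each pass can be written compactly (standard notation, independent of any particular implementation). Each sample x is assigned to the cluster with the nearest centroid:

j* = argmin_j || x - c_j ||

and each centroid c_j is then recomputed as the mean of the set S_j of samples currently in cluster j:

c_j = (1 / |S_j|) * Σ_{x ∈ S_j} x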

Note: You can implement the above problems no. 3 to 6 in C/C++/Java.


Assignment: 8

A case study of Business Intelligence in Government sector / Social Networking / Business.
SET-II

1. Create data-set in ARFF file format. Demonstration of preprocessing on a WEKA data-set
2. Demonstration of Association rule process on data-set contact lenses.arff / supermarket using the Apriori algorithm
3. Demonstration of classification rule process on a WEKA data-set using the J48 algorithm
4. Demonstration of classification rule process on a WEKA data-set using the ID3 algorithm
5. Demonstration of classification rule process on a WEKA data-set using the naive Bayes algorithm
6. Demonstration of clustering rule process on data-set iris.arff using simple k-means
Assignment: 1

Demonstration of preprocessing on dataset student.arff

Objective
To learn to use the Weka machine learning toolkit

References
Witten, Ian H. and Frank, Eibe. Data Mining: Practical Machine Learning Tools and Techniques.
Morgan Kaufmann.

Requirements
1. How do you load Weka?
2. What options are available on the main panel?
3. What is the purpose of the following in Weka:
   a. The Explorer
   b. The Knowledge Flow interface
   c. The Experimenter
   d. The command-line interface
4. Describe the ARFF file format.

Steps of execution:

Step 1: Loading the data. We can load the dataset into Weka by clicking on the open button in
the preprocessing interface and selecting the appropriate file.

Step 2: Once the data is loaded, Weka recognizes the attributes, and during the scan of the
data it computes some basic statistics on each attribute. The left panel shows the list of
recognized attributes, while the top panel indicates the names of the base relation or table
and the current working relation (which are the same initially).

Step 3: Clicking on an attribute in the left panel shows the basic statistics on that attribute.
For categorical attributes, the frequency of each attribute value is shown, while for
continuous attributes we can obtain the min, max, mean, standard deviation, etc.

Step 4: The visualization in the right bottom panel is in the form of a cross-tabulation across
two attributes.

Note: we can select another attribute using the dropdown list.

Step 5: Selecting or filtering attributes.

Removing an attribute: when we need to remove an attribute, we can do this by using the
attribute filters in Weka. In the filter panel, click on the choose button; this shows a
popup window with a list of available filters.

Scroll down the list and select the "weka.filters.unsupervised.attribute.Remove" filter.

Step 6: a) Next, click the text box immediately to the right of the choose button. In the
resulting dialog box, enter the index of the attribute to be filtered out.

b) Make sure that the invert selection option is set to false. Then click OK; now in the filter
box you will see "Remove -R 7".

c) Click the apply button to apply the filter to this data. This removes the attribute and
creates a new working relation.

d) Save the new working relation as an arff file by clicking the save button on the top panel
(student.arff).

Dataset student.arff

@relation student

@attribute age {<30,30-40,>40}

@attribute income {low, medium, high}

@attribute student {yes, no}

@attribute credit-rating {fair, excellent}

@attribute buyspc {yes, no}

@data

<30, high, no, fair, no

<30, high, no, excellent, no

30-40, high, no, fair, yes

>40, medium, no, fair, yes

>40, low, yes, fair, yes

>40, low, yes, excellent, no

30-40, low, yes, excellent, yes

<30, medium, no, fair, no

<30, low, yes, fair, no

>40, medium, yes, fair, yes


<30, medium, yes, excellent, yes

30-40, medium, no, excellent, yes

30-40, high, yes, fair, yes

>40, medium, no, excellent, no

%
Assignment: 2

Demonstration of preprocessing on dataset labor.arff


Objective
To learn to use the Weka machine learning toolkit

References
Witten, Ian H. and Frank, Eibe. Data Mining: Practical Machine Learning Tools and Techniques.
Morgan Kaufmann.

Execution steps

Step 1: Loading the data. We can load the dataset into Weka by clicking on the open button in
the preprocessing interface and selecting the appropriate file.

Step 2: Once the data is loaded, Weka recognizes the attributes, and during the scan of the
data it computes some basic statistics on each attribute. The left panel shows the list of
recognized attributes, while the top panel indicates the names of the base relation or table
and the current working relation (which are the same initially).

Step 3: Clicking on an attribute in the left panel shows the basic statistics on that attribute.
For categorical attributes, the frequency of each attribute value is shown, while for
continuous attributes we can obtain the min, max, mean, standard deviation, etc.

Step 4: The visualization in the right bottom panel is in the form of a cross-tabulation across
two attributes.

Note: we can select another attribute using the dropdown list.

Step 5: Selecting or filtering attributes.

Removing an attribute: when we need to remove an attribute, we can do this by using the
attribute filters in Weka. In the filter panel, click on the choose button; this shows a
popup window with a list of available filters.

Scroll down the list and select the "weka.filters.unsupervised.attribute.Remove" filter.

Step 6: a) Next, click the text box immediately to the right of the choose button. In the
resulting dialog box, enter the index of the attribute to be filtered out.

b) Make sure that the invert selection option is set to false. Then click OK; now in the filter
box you will see "Remove -R 7".

c) Click the apply button to apply the filter to this data. This removes the attribute and
creates a new working relation.

d) Save the new working relation as an arff file by clicking the save button on the top panel
(labor.arff).
Assignment: 3

Demonstration of Association rule process on dataset contactlenses.arff using the Apriori algorithm
Objective
To learn to use the Weka machine learning toolkit

References
Witten, Ian H. and Frank, Eibe. Data Mining: Practical Machine Learning Tools and Techniques.
Morgan Kaufmann.

Execution steps

Step 1: Open the data file in Weka Explorer. It is presumed that the required data fields
have been discretized. In this example it is the age attribute.

Step 2: Clicking on the associate tab brings up the interface for the association rule
algorithm.

Step 3: We will use the Apriori algorithm. This is the default algorithm.

Step 4: In order to change the parameters for the run (e.g., support, confidence, etc.),
we click on the text box immediately to the right of the choose button.

Dataset contactlenses.arff
Dataset test.arff

@relation test

@attribute admissionyear {2005,2006,2007,2008,2009,2010}

@attribute course {cse,mech,it,ece}

@data

2005, cse

2005, it

2005, cse

2006, mech

2006, it

2006, ece

2007, it

2007, cse

2008, it

2008, cse

2009, it

2009, ece

%
The following screenshot shows the association rules that were generated when the
Apriori algorithm is applied to the given dataset.
Assignment: 4

Demonstration of classification rule process on dataset student.arff using the J48 algorithm
Objective
To learn to use the Weka machine learning toolkit

References
Witten, Ian H. and Frank, Eibe. Data Mining: Practical Machine Learning Tools and Techniques.
Morgan Kaufmann.

Steps involved in this experiment:

Step 1: We begin the experiment by loading the data (student.arff) into Weka.

Step 2: Next, we select the "classify" tab and click the "choose" button to select the
"j48" classifier.

Step 3: Now we specify the various parameters. These can be specified by clicking in the text
box to the right of the choose button. In this example, we accept the default values. The
default version does perform some pruning but does not perform error pruning.

Step 4: Under the "test" options in the main panel, we select 10-fold cross-validation as
our evaluation approach. Since we don't have a separate evaluation data set, this is necessary to
get a reasonable idea of the accuracy of the generated model.

Step 5: We now click "start" to generate the model. The ASCII version of the tree as well as
evaluation statistics will appear in the right panel when the model construction is complete.

Step 6: Note that the classification accuracy of the model is about 69%. This indicates that
more work may be needed (either in preprocessing or in selecting better parameters for the
classification).

Step 7: Weka also lets us view a graphical version of the classification tree. This can
be done by right-clicking the last result set and selecting "visualize tree" from the pop-up
menu.

Step 8: We will use our model to classify new instances.

Step 9: In the main panel, under "test" options, click the "supplied test set" radio button and
then click the "set" button. This pops up a window which allows you to open the file
containing the test instances.
Dataset student.arff

@relation student
@attribute age {<30,30-40,>40}
@attribute income {low, medium, high}
@attribute student {yes, no}
@attribute credit-rating {fair, excellent}
@attribute buyspc {yes, no}
@data
%
<30, high, no, fair, no
<30, high, no, excellent, no
30-40, high, no, fair, yes
>40, medium, no, fair, yes
>40, low, yes, fair, yes
>40, low, yes, excellent, no
30-40, low, yes, excellent, yes
<30, medium, no, fair, no
<30, low, yes, fair, no
>40, medium, yes, fair, yes
<30, medium, yes, excellent, yes
30-40, medium, no, excellent, yes
30-40, high, yes, fair, yes
>40, medium, no, excellent, no
%

The following screenshot shows the classification rules that were generated when the
J48 algorithm is applied to the given dataset.
Assignment: 5

Demonstration of clustering rule process on data-set iris.arff using simple k-means
Objective
To learn to use the Weka machine learning toolkit

References
Witten, Ian H. and Frank, Eibe. Data Mining: Practical Machine Learning Tools and Techniques.
Morgan Kaufmann.

Execution steps

Step 1: Run the Weka explorer and load the data file iris.arff in the preprocessing
interface.

Step 2: In order to perform clustering, select the "cluster" tab in the explorer and click on the
choose button. This step results in a dropdown list of available clustering algorithms.

Step 3: In this case we select "simple k-means".

Step 4: Next, click the text box to the right of the choose button to get the popup window shown
in the screenshots. In this window we enter six as the number of clusters and leave the seed
value as it is. The seed value is used in generating a random number, which in turn is used
for making the initial assignment of instances to clusters.

Step 5: Once the options have been specified, we run the clustering algorithm. Here we must
make sure that the "use training set" option is selected in the "cluster mode" panel, and then
we click the "start" button. This process and the resulting window are shown in the following
screenshots.

Step 6: The result window shows the centroid of each cluster as well as statistics on the
number and percentage of instances assigned to the different clusters. Here the cluster
centroids are mean vectors, one for each cluster, and they can be used to characterize the
clusters. For example, the centroid of cluster 1 shows that for the class Iris-versicolor the
mean value of sepal length is 5.4706, sepal width 2.4765, petal width 1.1294, and petal
length 3.7941.

Step 7: Another way of understanding the characteristics of each cluster is through
visualization. We can do this by right-clicking the result set in the result list panel and
selecting "visualize cluster assignments".

The following screenshot shows the clustering rules that were generated when the simple
k-means algorithm is applied to the given dataset.

Step 8: We can save the resulting dataset, which includes each instance along with its
assigned cluster. To do so, we click the save button in the visualization window and save the
result as iris k-mean. The top portion of this file is shown in the following figure.
