MACHINE LEARNING
LABORATORY MANUAL
15CSL76
Department of Information Science & Engineering
Sahyadri College of Engineering & Management
Mangaluru
PREFACE
This Machine Learning Laboratory Manual is designed for the Information Science & Engineering
students of Sahyadri College of Engineering and Management, Mangaluru.
Machine Learning algorithms have gained wide application in many domains. This laboratory
manual is therefore prepared with the intention of providing the students a learning
opportunity to understand the basics of programming Machine Learning algorithms in Python.
The lab experiments reinforce the theory classes and give the students hands-on
experience with the concepts discussed there.
The manual contains the prescribed experiments, presented for quick and easy understanding.
The authors have gathered materials from books, journals and web resources.
We hope that this practical manual will help the students of Information Science &
Engineering understand the subject from the point of view of its applied aspects.
There is always scope for improvement in the manual, and we would appreciate valuable
suggestions from readers and users.
This Laboratory manual is based on the syllabus provided by VTU under the subject code
15CSL76.
ACKNOWLEDGMENTS
We are highly indebted to Dr. Shamanth Rai, Professor & H.O.D., Department of Information
Science & Engineering, for providing us the opportunity to prepare this manual as well as for
his guidance and constant supervision.
We would like to express our special gratitude and thanks to Dr. Srinivas Rao Kunte,
Principal, Sahyadri College of Engineering & Management, Mangaluru.
We are also thankful to Dr. D. L. Prabhakara, Director, Sahyadri Group of Institutions, and
Dr. Manjunath Bhandary, Chairman, Sahyadri Group of Institutions.
We are also thankful to Mr. Praveen Damle for proofreading and for his constant support in
the laboratories.
We are grateful to Mr. Gautham G. Rao (4SF15IS028) for all the support provided in the
development of the code.
We are thankful to the students and colleagues for their support and encouragement.
Praahas Amin
Madhura N. Hegde
5. Write a program to implement the Naïve Bayes classifier for a sample training data set stored as a .CSV file. Compute the accuracy of the classifier, considering a few test data sets.
6. Assuming a set of documents that need to be classified, use the naïve Bayesian Classifier model to perform this task. Built-in Java classes/API can be used to write the program. Calculate the accuracy, precision, and recall for your data set.
7. Write a program to construct a Bayesian network considering medical data. Use this model to demonstrate the diagnosis of heart patients using the standard Heart Disease Data Set. You can use Java/Python ML library classes/API.
8. Apply the EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering using the k-Means algorithm. Compare the results of these two algorithms and comment on the quality of clustering. You can add Java/Python ML library classes/API in the program.
9. Write a program to implement the k-Nearest Neighbour algorithm to classify the iris data set. Print both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
10. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points. Select an appropriate data set for your experiment and draw graphs.
• All laboratory experiments are to be included for the practical examination.
• Students are allowed to pick one experiment from the lot.
• Strictly follow the instructions printed on the cover page of the answer script.
• Marks distribution: Procedure (20) + Conduction (50) + Viva (10) = 80 marks.
• Change of experiment is allowed only once; the marks allotted to the procedure part are then made 0.
Subject Code: 15CSL76 | IA Marks: 20 | Exam Marks: 80 | Exam Hours: 03 | Credits: 02
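The listing for Program 1 (FIND-S, per the 15CSL76 syllabus) survives only in the source repository linked below. A minimal sketch of the algorithm, assuming a CSV with no header row whose last column is a yes/no target (the file name 'enjoysport.csv' is a placeholder):

import csv

with open('enjoysport.csv') as f:          # placeholder file name, no header row assumed
    rows = [row for row in csv.reader(f)]

hypothesis = None
for row in rows:
    if row[-1].strip().lower() == 'yes':   # FIND-S looks only at positive examples
        if hypothesis is None:
            hypothesis = row[:-1]          # initialise to the first positive example
        else:
            hypothesis = [h if h == a else '?'
                          for h, a in zip(hypothesis, row[:-1])]
print('Most specific hypothesis:', hypothesis)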
Output:
Dataset:
Source: https://github.com/praahas/machine-learning-vtu
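Program 2 (Candidate-Elimination) is likewise available only via the source link below; a compact classroom-style sketch under the same CSV assumption (placeholder file name, first training example assumed positive, as in the usual EnjoySport data):

import numpy as np
import pandas as pd

data = pd.read_csv('enjoysport.csv')       # placeholder file name
concepts = np.array(data.iloc[:, :-1])
target = np.array(data.iloc[:, -1])

specific_h = concepts[0].copy()            # S: most specific boundary
general_h = [['?'] * len(specific_h) for _ in range(len(specific_h))]
for i, h in enumerate(concepts):
    if target[i].strip().lower() == 'yes': # positive example: generalise S
        for x in range(len(specific_h)):
            if h[x] != specific_h[x]:
                specific_h[x] = '?'
                general_h[x][x] = '?'
    else:                                  # negative example: specialise G
        for x in range(len(specific_h)):
            if h[x] != specific_h[x]:
                general_h[x][x] = specific_h[x]
            else:
                general_h[x][x] = '?'
general_h = [g for g in general_h if g != ['?'] * len(specific_h)]
print('S =', list(specific_h))
print('G =', general_h)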
Output:
Dataset:
Source: https://github.com/ggrao1/Candidate-Elimination
    # (continuation of getNextNode(): count the training examples under each class value)
    countC = {}
    for cVal in conceptVals:
        dataCC = data[data[concept] == cVal]
        countC[cVal] = dataCC.shape[0]
    # if one class is absent the node is pure, so attach a leaf and stop
    if countC[conceptVals[0]] == 0:
        tree = insertConcept(tree, addTo, conceptVals[1])
        return tree
    if countC[conceptVals[1]] == 0:
        tree = insertConcept(tree, addTo, conceptVals[0])
        return tree
    ClassEntropy = infoGain(countC[conceptVals[1]], countC[conceptVals[0]])
    Attr = {}
    for a in AttributeList:
        Attr[a] = list(set(data[a]))
    AttrCount = {}
    EntropyAttr = {}
    # ... (the lines filling AttrCount and EntropyAttr were lost at the page break) ...
    Gain = {}
    for g in EntropyAttr:
        Gain[g] = ClassEntropy - EntropyAttr[g]
def main():
    import pandas as pd
    data = pd.read_csv('PlayTennis.csv', index_col=0)  # DataFrame.from_csv is deprecated
    print(data)
    AttributeList = list(data)[:-1]
    concept = str(list(data)[-1])
    conceptVals = list(set(data[concept]))
    tree = getNextNode(data, AttributeList, concept, conceptVals, {'root': 'None'}, 'root')
    print(tree)

main()
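The fragment above calls insertConcept() and infoGain(), whose definitions fell on pages lost in extraction. Minimal sketches consistent with how they are called (reconstructions, not the original code):

import math

def infoGain(p, n):
    # entropy of a two-class node containing p and n examples
    if p == 0 or n == 0:
        return 0
    t = p + n
    return -(p / t) * math.log2(p / t) - (n / t) * math.log2(n / t)

def insertConcept(tree, addTo, cVal):
    # replace the placeholder stored under the key addTo with the class label cVal
    for k, v in tree.items():
        if k == addTo:
            tree[k] = cVal
        elif isinstance(v, dict):
            insertConcept(v, addTo, cVal)
    return tree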
Output:
Dataset:
Source: https://github.com/ggrao1/decision-tree
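Only part of Program 4 (backpropagation) survived extraction. The lines below sketch the missing header the fragment needs to run: the imports, the sigmoid() used in the training loop, and a small training set. The exact numbers are an assumption (the two-feature, one-target array commonly used with this program), not the manual's original data:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Assumed training data: features scaled to [0, 1], target scaled by 100
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)
y = y / 100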
def derivatives_sigmoid(x):
    # derivative of the sigmoid expressed in terms of its output value
    return x * (1 - x)

epoch = 7000
learning_rate = 0.1
inputlayer_neurons = 2
hiddenlayer_neurons = 3
output_neurons = 1

# random initial weights and biases
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wo = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bo = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # forward pass
    net_h = np.dot(X, wh) + bh
    sigma_h = sigmoid(net_h)
    net_o = np.dot(sigma_h, wo) + bo
    output = sigmoid(net_o)
    # backward pass: gradients of the squared error
    deltaK = (y - output) * derivatives_sigmoid(output)
    deltaH = deltaK.dot(wo.T) * derivatives_sigmoid(sigma_h)
    wo = wo + sigma_h.T.dot(deltaK) * learning_rate
    wh = wh + X.T.dot(deltaH) * learning_rate

print(output)  # final network prediction
Output:
Source: https://github.com/praahas/machine-learning-vtu
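Program 5 below relies on a helper probAttr() whose definition was lost in extraction. A minimal sketch consistent with its call sites, which expect the pair (count, probability) for a column taking a given value:

def probAttr(data, col, val):
    count = len(data[data[col] == val])
    return count, count / len(data)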
def train(data, Attr, conceptVals, concept):
    conceptProbs = {}
    countConcept = {}
    for cVal in conceptVals:
        countConcept[cVal], conceptProbs[cVal] = probAttr(data, concept, cVal)
    AttrConcept = {}
    probability_list = {}
    for att in Attr:  # build conditional probability tables per attribute
        AttrConcept[att] = {}
        probability_list[att] = {}
        for val in Attr[att]:
            AttrConcept[att][val] = {}
            a, probability_list[att][val] = probAttr(data, att, val)
            for cVal in conceptVals:
                dataTemp = data[data[att] == val]
                AttrConcept[att][val][cVal] = len(dataTemp[dataTemp[concept] == cVal]) / countConcept[cVal]
    print("P(A) : ", conceptProbs, "\n")
    print("P(X/A) : ", AttrConcept, "\n")
    print("P(X) : ", probability_list, "\n")
    return conceptProbs, AttrConcept, probability_list

def test(examples, Attr, concept_list, conceptProbs, AttrConcept, probability_list):
    misclassification_count = 0
    Total = len(examples)
    for ex in examples:
        px = {}
        for a in Attr:
            for x in ex:
                for c in concept_list:
                    if x in AttrConcept[a]:
                        if c not in px:
                            px[c] = conceptProbs[c] * AttrConcept[a][x][c] / probability_list[a][x]
                        else:
                            px[c] = px[c] * AttrConcept[a][x][c] / probability_list[a][x]
        print(px)
        classification = max(px, key=px.get)
        print("Classification :", classification, "Expected :", ex[-1])
        if classification != ex[-1]:
            misclassification_count += 1
    misclassification_rate = misclassification_count * 100 / Total
    accuracy = 100 - misclassification_rate
    print("Misclassification Count={}".format(misclassification_count))
    print("Misclassification Rate={}%".format(misclassification_rate))
    print("Accuracy={}%".format(accuracy))

def main():
    import pandas as pd
    data = pd.read_csv('PlayTennis_train1.csv', index_col=0)  # from_csv is deprecated
    concept = str(list(data)[-1])
    concept_list = set(data[concept])
    Attr = {}
    for a in list(data)[:-1]:
        Attr[a] = set(data[a])
    conceptProbs, AttrConcept, probability_list = train(data, Attr, concept_list, concept)
    examples = pd.read_csv('PlayTennis_test1.csv', index_col=0)
    test(examples.values, Attr, concept_list, conceptProbs, AttrConcept, probability_list)

main()
Output:
Dataset:
Training Set
Testing example
Source: https://github.com/ggrao1/NaiveBayes
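The listing for Program 6 (naïve Bayes document classification) did not survive extraction; the source link below points to the repository version. A minimal Python sketch of the task using scikit-learn, in which the file name 'document.csv', its two-column layout, and the pos/neg labels are assumptions:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

msg = pd.read_csv('document.csv', names=['message', 'label'])  # assumed file and layout
msg['labelnum'] = msg.label.map({'pos': 1, 'neg': 0})
X_train, X_test, y_train, y_test = train_test_split(msg.message, msg.labelnum)

cv = CountVectorizer()                        # bag-of-words counts
X_train_counts = cv.fit_transform(X_train)
X_test_counts = cv.transform(X_test)

clf = MultinomialNB().fit(X_train_counts, y_train)
pred = clf.predict(X_test_counts)
print('Accuracy :', metrics.accuracy_score(y_test, pred))
print('Precision:', metrics.precision_score(y_test, pred))
print('Recall   :', metrics.recall_score(y_test, pred))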
Output:
Source: https://github.com/rumaan/machine-learning-lab-vtu
Dataset:
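The fragment below assumes cancer_model is already built; the construction lines fell outside the extracted pages. A sketch following the usual pgmpy "cancer" example (the edge list and CPD values here are the textbook ones and should be treated as assumptions, not this manual's exact figures):

from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD

cancer_model = BayesianModel([('Pollution', 'Cancer'), ('Smoker', 'Cancer'),
                              ('Cancer', 'Xray'), ('Cancer', 'Dyspnoea')])
cpd_poll = TabularCPD('Pollution', 2, [[0.9], [0.1]])
cpd_smoke = TabularCPD('Smoker', 2, [[0.3], [0.7]])
cpd_cancer = TabularCPD('Cancer', 2,
                        [[0.03, 0.05, 0.001, 0.02],
                         [0.97, 0.95, 0.999, 0.98]],
                        evidence=['Smoker', 'Pollution'], evidence_card=[2, 2])
cpd_xray = TabularCPD('Xray', 2, [[0.9, 0.2], [0.1, 0.8]],
                      evidence=['Cancer'], evidence_card=[2])
cpd_dysp = TabularCPD('Dyspnoea', 2, [[0.65, 0.3], [0.35, 0.7]],
                      evidence=['Cancer'], evidence_card=[2])
cancer_model.add_cpds(cpd_poll, cpd_smoke, cpd_cancer, cpd_xray, cpd_dysp)
cancer_model.check_model()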
# display the conditional probability tables attached to the network
print(cancer_model.get_cpds())
print(cancer_model.get_cpds('Pollution'))
print(cancer_model.get_cpds('Smoker'))
print(cancer_model.get_cpds('Xray'))
print(cancer_model.get_cpds('Dyspnoea'))
print(cancer_model.get_cpds('Cancer'))
# local and global independence relations implied by the structure
# (wrapped in print() so they are visible when run as a script)
print(cancer_model.local_independencies('Xray'))
print(cancer_model.local_independencies('Pollution'))
print(cancer_model.local_independencies('Smoker'))
print(cancer_model.local_independencies('Dyspnoea'))
print(cancer_model.local_independencies('Cancer'))
print(cancer_model.get_independencies())
Output:
Inferencing:
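The inference screenshot is missing from the extracted text; a hedged sketch of the kind of query it would show, using pgmpy's VariableElimination (the evidence chosen here is illustrative):

from pgmpy.inference import VariableElimination

infer = VariableElimination(cancer_model)
# posterior over Cancer given that the patient is a smoker
q = infer.query(variables=['Cancer'], evidence={'Smoker': 1})
print(q)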
import sys
import numpy as np
import pandas as pd
from urllib.request import urlopen

Cleveland_data_URL = 'http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.hungarian.data'
np.set_printoptions(threshold=sys.maxsize)  # print whole arrays (np.nan is no longer accepted here)
names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang',
         'oldpeak', 'slope', 'ca', 'thal', 'heartdisease']
# note: despite the variable name, this URL serves the processed Hungarian data
heartDisease = pd.read_csv(urlopen(Cleveland_data_URL), names=names)
# drop sparsely recorded columns
del heartDisease['ca']
del heartDisease['slope']
del heartDisease['thal']
del heartDisease['oldpeak']
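# --- The lines defining `model` fell outside the extracted pages. ---
# A minimal sketch, assuming a hand-chosen structure fit by maximum
# likelihood; the edge list below is illustrative, not the original's.
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator

heartDisease = heartDisease.replace('?', np.nan)  # UCI files mark missing values with '?'
model = BayesianModel([('age', 'heartdisease'), ('sex', 'heartdisease'),
                       ('heartdisease', 'chol'), ('heartdisease', 'restecg')])
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)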
print(model.get_cpds('age'))
print(model.get_cpds('chol'))
print(model.get_cpds('sex'))
print(model.get_independencies())
Output:
Diagnosis:
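The diagnosis screenshot is also missing; a hedged sketch of the query it would correspond to, again via VariableElimination (the evidence values are placeholders and must match states present in the data):

from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
# probability of heart disease for a 37-year-old female patient
q = infer.query(variables=['heartdisease'], evidence={'age': 37, 'sex': 0})
print(q)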
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn import preprocessing

dataset = load_iris()
X = pd.DataFrame(dataset.data)
X.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']
y = pd.DataFrame(dataset.target)
y.columns = ['Targets']

plt.figure(figsize=(14, 7))
colormap = np.array(['red', 'lime', 'black'])

# REAL PLOT
plt.subplot(1, 3, 1)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real')

# KMeans PLOT
plt.subplot(1, 3, 2)
model = KMeans(n_clusters=3)
model.fit(X)
predY = np.choose(model.labels_, [0, 1, 2]).astype(np.int64)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[predY], s=40)
plt.title('KMeans')

# GMM PLOT: standardise the features before fitting the mixture
scaler = preprocessing.StandardScaler()
scaler.fit(X)
xsa = scaler.transform(X)
xs = pd.DataFrame(xsa, columns=X.columns)
gmm = GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm = gmm.predict(xs)
plt.subplot(1, 3, 3)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[y_cluster_gmm], s=40)
plt.title('GMM Classification')
plt.show()
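The experiment asks for a comparison of the two clusterings; one way to quantify it (an addition, not part of the extracted program) is the adjusted Rand index from scikit-learn, which scores each clustering against the true iris labels:

from sklearn import metrics

print('k-Means ARI:', metrics.adjusted_rand_score(y.Targets, model.labels_))
print('GMM ARI    :', metrics.adjusted_rand_score(y.Targets, y_cluster_gmm))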
Output:
Dataset:
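Only the final print statements of Program 9 survived extraction. The sketch below reconstructs the header they need: loading iris, splitting, fitting a k-NN classifier, and looping over the test set. The split parameters and k value are assumptions consistent with the 38-example output shown:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

dataset = load_iris()
X_train, X_test, y_train, y_test = train_test_split(dataset["data"], dataset["target"], random_state=0)
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
for i in range(len(X_test)):
    prediction = clf.predict([X_test[i]])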
print("TARGET=",y_test[i],dataset["target_names"][y_test[i]],"PREDICTED=",prediction,dataset["target_
names"][prediction])
print(clf.score(X_test,y_test))
Output:
TARGET= 2 virginica PREDICTED= [2] ['virginica'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 2 virginica PREDICTED= [2] ['virginica']
TARGET= 0 setosa PREDICTED= [0] ['setosa'] TARGET= 1 versicolor PREDICTED= [1] ['versicolor']
TARGET= 2 virginica PREDICTED= [2] ['virginica'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 0 setosa PREDICTED= [0] ['setosa'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 2 virginica PREDICTED= [2] ['virginica'] TARGET= 2 virginica PREDICTED= [2] ['virginica']
TARGET= 0 setosa PREDICTED= [0] ['setosa'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 1 versicolor PREDICTED= [1] ['versicolor']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 1 versicolor PREDICTED= [1] ['versicolor']
TARGET= 2 virginica PREDICTED= [2] ['virginica'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 2 virginica PREDICTED= [2] ['virginica']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 1 versicolor PREDICTED= [1] ['versicolor']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 2 virginica PREDICTED= [2] ['virginica']
TARGET= 0 setosa PREDICTED= [0] ['setosa'] TARGET= 2 virginica PREDICTED= [2] ['virginica']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 1 versicolor PREDICTED= [1] ['versicolor']
TARGET= 1 versicolor PREDICTED= [1] ['versicolor'] TARGET= 0 setosa PREDICTED= [0] ['setosa']
TARGET= 0 setosa PREDICTED= [0] ['setosa'] TARGET= 1 versicolor PREDICTED= [2] ['virginica']
0.973684210526
Dataset:
Source: https://github.com/praahas/machine-learning-vtu
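Only the tail of the lowess() routine survived extraction. Below is a sketch of the missing head, consistent with the surviving lines: tri-cube neighbourhood weights followed by an iteratively reweighted local linear fit. Treat it as a reconstruction, not the original listing:

import numpy as np

def lowess(x, y, f, iterations):
    n = len(x)
    r = int(np.ceil(f * n))                  # points in each local neighbourhood
    h = [np.sort(np.abs(x - x[i]))[r] for i in range(n)]
    w = np.clip(np.abs((x[:, None] - x[None, :]) / h), 0.0, 1.0)
    w = (1 - w ** 3) ** 3                    # tri-cube kernel weights
    yest = np.zeros(n)
    delta = np.ones(n)
    for iteration in range(iterations):
        for i in range(n):
            weights = delta * w[:, i]
            b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
            A = np.array([[np.sum(weights), np.sum(weights * x)],
                          [np.sum(weights * x), np.sum(weights * x * x)]])
            beta = np.linalg.solve(A, b)
            yest[i] = beta[0] + beta[1] * x[i]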
        # robustifying weights: downweight points with large residuals
        residuals = y - yest
        s = np.median(np.abs(residuals))
        delta = np.clip(residuals / (6.0 * s), -1, 1)
        delta = (1 - delta ** 2) ** 2
    return yest

def main():
    import math
    import matplotlib.pyplot as plt
    n = 100
    x = np.linspace(0, 2 * math.pi, n)
    y = np.sin(x) + 0.3 * np.random.randn(n)  # noisy sine samples
    f = 0.25       # smoothing span: fraction of points used in each local fit
    iterations = 3
    yest = lowess(x, y, f, iterations)
    plt.plot(x, y, 'r.', label='data')        # plotting added so the graph the
    plt.plot(x, yest, 'b-', label='lowess')   # experiment asks for is drawn
    plt.legend()
    plt.show()

main()
Output: