Ashin ML Record - Merged
Department of Artificial Intelligence and Data Science
Name :
Register Number :
Regulation : R-2021
EX.NO  DATE  NAME OF THE EXPERIMENT                                   PAGE NO  MARKS  SIGN OF FACULTY
1            CANDIDATE ELIMINATION ALGORITHM
2            ID3 ALGORITHM
3            BACK PROPAGATION ALGORITHM
4            NAÏVE BAYESIAN CLASSIFIER
5            NAÏVE BAYESIAN CLASSIFIER (DOCUMENT CLASSIFICATION)
6            BAYESIAN NETWORK
7            EM ALGORITHM (K-MEANS ALGORITHM)
8            K-NEAREST NEIGHBOUR ALGORITHM
9            LOCALLY WEIGHTED REGRESSION ALGORITHM
EX NO: 01 IMPLEMENTATION OF CANDIDATE-ELIMINATION ALGORITHM
DATE:
AIM:
To implement and demonstrate the Candidate-Elimination algorithm for a given set of training
data examples stored in a .CSV file, and to output a description of the set of all hypotheses consistent with the
training examples.
ALGORITHM:
Step 1: Load the training data set and separate the attribute values from the target concept.
Step 2: Initialize the specific hypothesis S to the first positive example and the general hypothesis G to the most general hypothesis.
Step 3: For each positive example, compare every attribute value with S:
        if attribute_value == hypothesis_value:
            Do nothing
        else:
            Replace the attribute value in S (and the corresponding entry of G) with '?'.
Step 4: For each negative example, specialize G so that it excludes the example while remaining consistent with S.
Step 5: Output the final specific hypothesis S and the set of general hypotheses G.
SOURCE CODE:
import pandas as pd

# Load the training examples; the last column is the target concept
data = pd.read_csv("training_data.csv")         # file name assumed
concepts = data.iloc[:, :-1]
target = data.iloc[:, -1]

# S starts as the first example; G starts as the most general hypothesis
specific_h = concepts.iloc[0].copy()
general_h = pd.DataFrame([['?'] * len(specific_h)] * len(specific_h),
                         columns=specific_h.index)
print(general_h)

for i in range(len(concepts)):
    if target.iloc[i] == "Yes":                 # positive example: generalise S
        for x in range(len(specific_h)):
            if concepts.iloc[i, x] != specific_h.iloc[x]:
                specific_h.iloc[x] = '?'
                general_h.iloc[x, x] = '?'
    if target.iloc[i] == "No":                  # negative example: specialise G
        for x in range(len(specific_h)):
            if concepts.iloc[i, x] != specific_h.iloc[x]:
                general_h.iloc[x, x] = specific_h.iloc[x]
            else:
                general_h.iloc[x, x] = '?'

# Drop rows of G that place no constraint at all
g_final = general_h[~(general_h == '?').all(axis=1)]
print("\nFinal Specific_h:")
print(specific_h)
print("\nFinal General_h:")
print(g_final)
GitHub:
Experience the power of prediction through the candidate elimination algorithm. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the candidate elimination algorithm to iteratively refine hypotheses based on data. Watch how it works.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the candidate elimination algorithm and discover how it enhances forecasting accuracy.
Thus, the Candidate-Elimination algorithm to test all the hypotheses against the training set was
executed and verified successfully using Python.
EX NO: 02 IMPLEMENTATION OF ID3 ALGORITHM
DATE:
AIM:
To build a decision tree using the ID3 algorithm to classify a new sample using Python.
ALGORITHM:
Step 1: Observe the data set and import the necessary basic Python libraries.
Step 6: Find the most informative feature (the feature with the highest information gain).
SOURCE CODE:
    # Methods of the DecisionTreeID3 class (Node and the class wrapper are
    # defined earlier in the program)
    def build_tree(self, data, features, target_name):
        # Leaf node: every remaining example carries the same label
        if data[target_name].nunique() == 1:
            return Node(result=data[target_name].iloc[0])
        # Choose the feature with the highest information gain
        information_gains = [(feature, self.information_gain(data, feature, target_name))
                             for feature in features]
        best_feature, _ = max(information_gains, key=lambda x: x[1])
        root = Node(feature=best_feature)
        for value in data[best_feature].unique():
            subset = data[data[best_feature] == value]
            root.children[value] = self.build_tree(
                subset, [f for f in features if f != best_feature], target_name)
        return root

    def fit(self, data, target_name):
        features = [col for col in data.columns if col != target_name]
        self.root = self.build_tree(data, features, target_name)
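The tree builder above calls self.information_gain, which the record does not show. A minimal sketch of the missing helpers, assuming categorical features and placement inside the same DecisionTreeID3 class:

    def entropy(self, series):
        # H(S) = -sum(p_i * log2(p_i)) over the class proportions
        probs = series.value_counts(normalize=True)
        return -(probs * np.log2(probs)).sum()

    def information_gain(self, data, feature, target_name):
        # Gain(S, A) = H(S) - sum(|S_v| / |S| * H(S_v)) over the values of A
        total = self.entropy(data[target_name])
        weighted = sum(len(subset) / len(data) * self.entropy(subset[target_name])
                       for _, subset in data.groupby(feature))
        return total - weighted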
def main():
    uploaded_file = st.file_uploader("Upload the PlayTennis CSV file")  # prompt assumed
    if uploaded_file is not None:
        data = pd.read_csv(uploaded_file)
        # Create DataFrame
        df = pd.DataFrame(data)
        # Initialize the DecisionTreeID3 model
        model = DecisionTreeID3()
        # Train the model
        model.fit(df, 'PlayTennis')
        # Make predictions
        predictions = model.predict(df)
        st.write("Predictions:", predictions)

if __name__ == "__main__":
    main()
GitHub:
Experience the power of prediction through the ID3 algorithm. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the ID3 algorithm to iteratively refine hypotheses based on data. Watch how it works.
App:
Predict weather patterns seamlessly with our Streamlit app leveraging the ID3 algorithm. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the ID3 algorithm and discover how it enhances forecasting accuracy.
Thus, the program to implement the decision-tree-based ID3 algorithm using Python was executed
and verified successfully.
EX NO: 03 IMPLEMENTATION OF BACK PROPAGATION ALGORITHM
DATE:
AIM:
To implement the Back Propagation algorithm to build an Artificial Neural Network.
ALGORITHM:
Step 1: Inputs X arrive through the preconnected path.
Step 2: The input is modeled using real weights W, which are usually selected at random.
Step 3: Calculate the output of every neuron from the input layer, through the hidden layers, to the output layer.
Step 4: Calculate the error in the outputs.
Step 5: Travel back from the output layer to the hidden layers to adjust the weights so that the error
decreases. Keep repeating the process until the desired output is achieved.
SOURCE CODE:
# Initialize a network
network = list()

# Forward propagate an input row through the network
# (activate() and transfer() are defined earlier in the program)
def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = []
        for neuron in layer:
            neuron['output'] = transfer(activate(neuron['weights'], inputs))
            new_inputs.append(neuron['output'])
        inputs = new_inputs
    return inputs

# Calculate the derivative of a neuron output (sigmoid transfer)
def transfer_derivative(output):
    return output * (1.0 - output)

# Backpropagate the error and store a delta in every neuron
def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        if i != len(network) - 1:
            # Hidden layer: error is the weighted sum of downstream deltas
            errors = [sum(n['weights'][j] * n['delta'] for n in network[i + 1])
                      for j in range(len(layer))]
        else:
            # Output layer: error is output minus expected
            errors = [layer[j]['output'] - expected[j] for j in range(len(layer))]
        for j in range(len(layer)):
            neuron = layer[j]
            neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
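The record stops once the deltas are stored; the weight update that completes one training step typically looks like the sketch below, where the learning-rate parameter l_rate and the bias-as-last-weight layout are assumptions consistent with the code above:

# Update network weights with the stored deltas (row's last element is the label)
def update_weights(network, row, l_rate):
    for i in range(len(network)):
        inputs = row[:-1]
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i - 1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weights'][j] -= l_rate * neuron['delta'] * inputs[j]
            neuron['weights'][-1] -= l_rate * neuron['delta']   # bias weight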
GitHub:
Experience the power of prediction through the backpropagation algorithm. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the backpropagation algorithm to iteratively refine hypotheses based on data. Watch how it works.
App:
Predict weather patterns seamlessly with our Streamlit app leveraging the backpropagation algorithm. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the backpropagation algorithm and discover how it enhances forecasting accuracy.
Thus, the Back Propagation algorithm to build an Artificial Neural Network was implemented
successfully using Python.
EX NO: 04 IMPLEMENTATION OF NAÏVE BAYESIAN CLASSIFIER
DATE:
AIM:
To implement a Naïve Bayesian classifier for the Tennis data set and to compute its accuracy on a few test samples.
ALGORITHM:
Step 1: Load the Tennis data set and separate the features from the target.
Step 2: Compute the prior probability of each class from the training examples.
Step 3: Compute the likelihood of each attribute value given each class.
Step 4: For a test sample, compute the posterior of each class using Bayes' theorem.
Step 5: Predict the class with the highest posterior and compute the accuracy on the test set.
SOURCE CODE:
import numpy as np
import pandas as pd

class NaiveBayes:   # class name assumed; the record shows only fragments
    # __init__, which builds the per-class statistics, is truncated in the record
    def predict(self, X):
        posteriors = []
        for x in X:
            posteriors.append([self._posterior(x, c) for c in self.classes])
        return self.classes[np.argmax(posteriors, axis=1)]

    def _posterior(self, x, c):
        # Body omitted in the record: returns the prior P(c) multiplied by
        # the product of the attribute likelihoods P(x_i | c)
        ...

# data is loaded earlier from the uploaded CSV file
X = data.iloc[:, :-1]
y = data.iloc[:, -1]

# Convert categorical data to numerical codes
for col in X.columns:
    X[col] = X[col].astype('category').cat.codes
y = y.astype('category').cat.codes

# Split data into train and test sets
split_ratio = 0.8
indices = np.random.permutation(len(X))
train_size = int(len(X) * split_ratio)
train_idx, test_idx = indices[:train_size], indices[train_size:]
X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]

# Train classifier
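The record breaks off at the training step. A hedged sketch of how the classifier would be trained and scored; the NaiveBayes constructor and fit signature are assumptions, since the record shows only fragments of the class:

nb = NaiveBayes()
nb.fit(X_train.values, y_train.values)          # assumed signature
y_pred = nb.predict(X_test.values)
print("Accuracy:", np.mean(y_pred == y_test.values))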
GitHub:
Experience the power of prediction through the naïve Bayesian classifier. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the naïve Bayesian classifier to iteratively refine hypotheses based on data. Watch how it works.
App:
Predict outcomes seamlessly with our Streamlit app leveraging the naïve Bayesian classifier. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the naïve Bayesian classifier and discover how it enhances forecasting accuracy.
Thus, the program to implement a Naïve Bayesian classifier and compute its accuracy on a few data sets
using Python was executed and verified successfully.
EX NO: 05 NAÏVE BAYESIAN CLASSIFIER FOR DOCUMENT CLASSIFICATION
DATE:
AIM:
To classify a set of documents using a Naïve Bayesian classifier and to measure the accuracy and precision.
ALGORITHM:
Step 1: Load the labelled documents and map each label to a numeric value.
Step 2: Split the documents into training and test sets.
Step 3: Convert each document into a word-count feature vector.
Step 4: Train the Naïve Bayesian classifier on the training vectors.
Step 5: Predict the labels of the test documents and compute the accuracy and precision.
SOURCE CODE:
X = msg.message
y = msg.labelnum

for doc, p in zip(Xtest, pred):     # iterate over the test documents
    p = 'pos' if p == 1 else 'neg'
    st.write(f"{doc} -> {p}")

# Accuracy Metrics
def accuracy_score(y_true, y_pred):
    return np.mean(y_true == y_pred)

def confusion_matrix(y_true, y_pred):
    # Rows are actual labels, columns are predicted labels
    cm = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm
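The vectorisation and training steps between loading the messages and printing the predictions are missing from the record. A typical pipeline for this experiment, assuming scikit-learn (the record itself does not name the library):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

msg = pd.read_csv('documents.csv', names=['message', 'label'])  # file name assumed
msg['labelnum'] = msg.label.map({'pos': 1, 'neg': 0})
Xtrain, Xtest, ytrain, ytest = train_test_split(msg.message, msg.labelnum)

count_vect = CountVectorizer()
Xtrain_dtm = count_vect.fit_transform(Xtrain)   # document-term matrix
Xtest_dtm = count_vect.transform(Xtest)

clf = MultinomialNB().fit(Xtrain_dtm, ytrain)
pred = clf.predict(Xtest_dtm)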
GitHub:
Experience the power of prediction through the naïve Bayesian classifier model. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the naïve Bayesian classifier model to iteratively refine hypotheses based on data. Watch how it works.
App:
Classify documents seamlessly with our Streamlit app leveraging the naïve Bayesian classifier model. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the naïve Bayesian classifier model and discover how it enhances forecasting accuracy.
Thus, the accuracy and precision were measured successfully using the Naïve Bayesian classifier model.
EX NO: 06 CONSTRUCTION OF BAYESIAN NETWORK
DATE:
AIM:
To construct a Bayesian network to diagnose corona infection using WHO data set.
ALGORITHM:
Step 1: Load the WHO corona data set from the CSV file.
Step 2: Define the structure of the Bayesian network over the symptom and diagnosis variables.
Step 3: Estimate the conditional probabilities of each node from the data.
Step 4: Given the observed symptoms, infer the probability of infection.
Step 5: Report the diagnosis with the highest posterior probability.
SOURCE CODE:
if uploaded_file is not None:
    data = pd.read_csv(uploaded_file)
    # Instantiate the Bayesian network on the uploaded data
    bayesian_network = BayesianNetwork(data)
    # symptoms are collected earlier from the user's input widgets
    if st.button('Diagnose'):
        diagnosis = bayesian_network.predict(symptoms)
        st.write(f'The diagnosis based on the symptoms is: '
                 f'{"Positive" if diagnosis == 1 else "Negative"}')
else:
    st.write("Please upload a CSV file.")
GitHub:
Experience the power of prediction through the Bayesian network. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the Bayesian network to iteratively refine hypotheses based on data. Watch how it works.
App:
Predict weather patterns seamlessly with our Streamlit app leveraging the Bayesian network. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the Bayesian network and discover how it enhances forecasting accuracy.
Thus, the program to diagnose corona infection using a Bayesian network was implemented
successfully using Python.
EX NO: 07 EM ALGORITHM (K-MEANS ALGORITHM)
DATE:
AIM:
To compare the clustering produced by the EM algorithm and the K-Means algorithm using the same data sets.
ALGORITHM:
Step 1: Expectation step (E-step): estimate (guess) all the missing values in the data set, so that after
completing this step there are no missing values.
Step 2: Maximization step (M-step): use the values estimated in the E-step to update the model
parameters.
Step 3: Repeat E-step and M-step until the convergence of the values occurs.
SOURCE CODE:
from sklearn import preprocessing

# Standardise the features before clustering
scaler = preprocessing.StandardScaler()
scaler.fit(X)
xsa = scaler.transform(X)
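The record shows only the scaling step; the actual comparison on the scaled data can be sketched with scikit-learn, whose GaussianMixture implements the EM algorithm (the library choice is an assumption):

from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# K-Means: hard assignments that minimise within-cluster variance
kmeans_labels = KMeans(n_clusters=3, n_init=10).fit_predict(xsa)

# EM (Gaussian mixture): soft assignments refined by alternating E- and M-steps
em_labels = GaussianMixture(n_components=3).fit(xsa).predict(xsa)

print("K-Means labels:", kmeans_labels[:10])
print("EM labels:", em_labels[:10])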
GitHub:
Experience the power of prediction through the EM algorithm and the K-Means algorithm. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the EM algorithm and the K-Means algorithm to iteratively refine hypotheses based on data. Watch how it works.
App:
Cluster data seamlessly with our Streamlit app leveraging the EM algorithm and the K-Means algorithm. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the EM algorithm and the K-Means algorithm and discover how they enhance forecasting accuracy.
Thus, the clustering produced by the EM algorithm and the K-Means algorithm was compared successfully using Python.
EX NO: 08 IMPLEMENTATION OF K-NEAREST NEIGHBOUR ALGORITHM
DATE:
AIM:
To implement the K-Nearest Neighbour algorithm to classify the iris data set.
ALGORITHM:
Step 1: Load the iris data set and split it into training and test sets.
Step 2: Choose the number of neighbours K.
Step 3: For each test sample, compute its distance to every training sample.
Step 4: Select the K training samples nearest to the test sample and assign it the majority class
among them.
Step 5: Compare the predicted classes with the true classes to measure the accuracy.
SOURCE CODE:
import numpy as np

iris_dataset = {
    # Only the first five samples are kept here; the full arrays are
    # truncated in the record
    'data': np.array([[5.1, 3.5, 1.4, 0.2], [4.9, 3.0, 1.4, 0.2],
                      [4.7, 3.2, 1.3, 0.2], [4.6, 3.1, 1.5, 0.2],
                      [5.0, 3.6, 1.4, 0.2]]),
    'target': np.array([0, 0, 0, 0, 0]),
    'target_names': ['setosa', 'versicolor', 'virginica'],
    'feature_names': ['sepal length (cm)', 'sepal width (cm)',
                      'petal length (cm)', 'petal width (cm)']
}

# Streamlit code to display the dataset
st.title("Sample of the Iris dataset")
for i in range(5):
    st.write(f"Sample {i+1}:")
    st.write(" Features:", iris_dataset['data'][i])
    st.write(" Target:", iris_dataset['target'][i], "(",
             iris_dataset['target_names'][iris_dataset['target'][i]], ")")
    st.write("")
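For comparison, the same experiment runs end to end with scikit-learn's built-in iris loader and classifier; this is a sketch alongside the record's hand-built data set, not a replacement for it:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)       # K = 3 nearest neighbours
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))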
GitHub:
Experience the power of prediction through the K-nearest neighbour algorithm. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the K-nearest neighbour algorithm to iteratively refine hypotheses based on data. Watch how it works.
App:
Classify samples seamlessly with our Streamlit app leveraging the K-nearest neighbour algorithm. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the K-nearest neighbour algorithm and discover how it enhances forecasting accuracy.
Thus, the program for the K-Nearest Neighbour algorithm was implemented successfully using the iris data set.
EX NO: 09 IMPLEMENTATION OF LOCALLY WEIGHTED REGRESSION ALGORITHM
DATE:
AIM:
To implement the non-parametric Locally Weighted Regression algorithm in order to fit data points.
ALGORITHM:
Step 1: Read the given data sample into X and the curve (linear or nonlinear) into Y.
Step 2: Set the value of the smoothening (free) parameter τ.
Step 3: For a query point x0, compute the weight of each training point as w_i = exp(-(x_i - x0)^2 / (2τ^2)).
Step 4: Compute the local model parameters as β = (XᵀWX)⁻¹ XᵀWy, where W is the diagonal matrix of weights.
Step 5: Output the prediction at the query point, y0 = x0 · β.
SOURCE CODE:
import numpy as np
import pandas as pd
import streamlit as st

# X and Y are generated earlier in the program (generation code omitted in
# the record); the prediction routine returns y0 : float per query point
X = X.flatten()                                 # ensure X is 1D
st.subheader("Generated Dataset")
data = np.vstack((X, Y)).T                      # pair each feature with its target
st.write(pd.DataFrame(data, columns=["Feature", "Target"]))
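The prediction routine itself is truncated in the record; a minimal numpy sketch of locally weighted regression at a single query point x0, following the weight and parameter formulas from the algorithm steps (the function name and signature are assumptions):

def local_regression(x0, X, Y, tau):
    # Add a bias column so the local model is y = b0 + b1 * x
    Xmat = np.column_stack((np.ones(len(X)), X))
    x0_vec = np.array([1.0, x0])
    # Gaussian kernel weights: w_i = exp(-(x_i - x0)^2 / (2 * tau^2))
    W = np.diag(np.exp(-(X - x0) ** 2 / (2 * tau ** 2)))
    # beta = (X^T W X)^-1 X^T W y
    beta = np.linalg.pinv(Xmat.T @ W @ Xmat) @ Xmat.T @ W @ Y
    return x0_vec @ beta                        # y0 : float, prediction at x0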
OUTPUT:
GitHub:
Experience the power of prediction through the non-parametric Locally Weighted Regression algorithm. Explore the code on GitHub for seamless integration and advanced forecasting capabilities.
Explanation Video:
Explore concept learning with ease. Our app harnesses the power of the non-parametric Locally Weighted Regression algorithm to iteratively refine hypotheses based on data. Watch how it works.
App:
Predict weather patterns seamlessly with our Streamlit app leveraging the non-parametric Locally Weighted Regression algorithm. Witness dynamic hypothesis refinement for accurate forecasts.
Blog:
Unlock the power of prediction with our latest blog post. Dive into the depths of the non-parametric Locally Weighted Regression algorithm and discover how it enhances forecasting accuracy.
Thus, the non-parametric Locally Weighted Regression algorithm to fit data points was implemented successfully using Python.