Machine Learning Algorithms


MACHINE LEARNING ALGORITHMS
LIST OF ALGORITHMS
 Linear Regression
 Logistic Regression
 Decision Tree
 K-Nearest Neighbors (KNN)
 Support Vector Machines (SVM)
 Naive Bayes
LINEAR REGRESSION
 Linear Regression establishes the relationship between independent
and dependent variables by fitting the best line. This best-fit line is
known as the regression line and is represented by the linear equation:

Y = a*X + b
In this equation:
 Y – Dependent variable
 a – Slope
 X – Independent variable
 b – Intercept

 Example CODE Linear Regression.txt
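The referenced file is not reproduced here; as a stand-in, a minimal sketch of fitting Y = a*X + b with NumPy's least-squares fit (the toy data are made up for illustration):

```python
import numpy as np

# Toy data that follow Y = 2*X + 1 exactly (illustrative values)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

# polyfit with degree 1 returns the slope (a) and intercept (b)
# of the best-fit regression line
a, b = np.polyfit(X, Y, 1)
print(f"Y = {a:.2f}*X + {b:.2f}")  # recovers slope 2 and intercept 1
```

On real data the points will not lie exactly on the line; least squares then picks the a and b that minimize the squared vertical distances to the line.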


LOGISTIC REGRESSION
Logistic Regression is used to estimate discrete values (usually binary
values like 0/1, yes/no, true/false) from a set of independent variables.
It predicts the probability of an event by fitting the data to a logit
function, which is why it is also called logit regression.

 Example CODE Logistic Regression.txt
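Since the referenced file is not included, a minimal scikit-learn sketch with made-up study-hours vs. pass/fail data (all values are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours studied -> pass (1) or fail (0)
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Hard 0/1 predictions for two new students
print(model.predict([[1.0], [3.8]]))

# The fitted logit function also gives the probability of the event
print(model.predict_proba([[2.0]])[0, 1])  # P(pass) for 2 hours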


DECISION TREE

 The Decision Tree algorithm is one of the most popular supervised
learning algorithms in use today; it is most often used for
classification problems.
 It works well for classifying both categorical and continuous
dependent variables.
 The algorithm divides the population into two or more homogeneous
sets based on the most significant attributes/independent variables.

 Example code: Decision tree.txt
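The referenced file is not included; a minimal scikit-learn sketch of the splitting idea, using hypothetical [age, income] features (values chosen purely for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: [age, income] -> buys product (0/1)
X = [[25, 30], [30, 35], [45, 80], [50, 90], [22, 28], [48, 85]]
y = [0, 0, 1, 1, 0, 1]

# The tree repeatedly splits the population on the most
# informative attribute until the subsets are homogeneous
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[26, 32], [47, 82]]))
```

Limiting `max_depth` keeps the splits interpretable and guards against overfitting on small samples.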


KNN (K- NEAREST NEIGHBORS)
This algorithm can be applied to both classification and regression problems, although
within the data science industry it is more widely used for classification.
It is a simple algorithm that stores all available cases and classifies any new case by
taking a majority vote of its k nearest neighbors. The new case is assigned to the class
with which it has the most in common, as measured by a distance function.
KNN can be understood by comparing it to real life: if you want information about a
person, it makes sense to talk to his or her friends and colleagues.

Things to consider before selecting the K-Nearest Neighbors algorithm:

 KNN is computationally expensive.
 Variables should be normalized, or else higher range variables can bias the algorithm.
 Data still needs to be pre-processed.

 Example code KNN.txt
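The referenced file is not included; a minimal scikit-learn sketch that also applies the normalization advice above (the [height, weight] data are made up for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical data: [height_cm, weight_kg] -> class 0 or 1
X = np.array([[150, 50], [155, 55], [160, 58],
              [180, 80], [185, 85], [190, 90]])
y = np.array([0, 0, 0, 1, 1, 1])

# Normalize first: otherwise the larger-range variable (height)
# dominates the distance function and biases the vote
scaler = StandardScaler().fit(X)
knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X), y)

# Classify a new case by majority vote of its 3 nearest neighbors
new_case = scaler.transform([[158, 57]])
print(knn.predict(new_case))
```

Because every prediction searches the stored training cases, KNN gets slow as the dataset grows, which is the computational cost noted above.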


SUPPORT VECTOR MACHINES (SVM)
 SVM is a classification algorithm in which you plot raw data as
points in an n-dimensional space (where n is the number of features
you have).
 The value of each feature is then tied to a particular coordinate,
making it easy to classify the data.
 Lines (hyperplanes, in higher dimensions) called classifiers can be
used to split the data and plot them on a graph.

 Example code SVM.txt
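The referenced file is not included; a minimal scikit-learn sketch with two features (n = 2), so the separating hyperplane is a line (the data points are made up for illustration):

```python
from sklearn.svm import SVC

# Each point lives in a 2-dimensional feature space (n = 2)
X = [[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]]
y = [0, 0, 0, 1, 1, 1]

# A linear kernel finds the separating line between the two classes
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[1.5, 2.0], [7.0, 6.0]]))
```

Swapping the kernel (e.g. `kernel="rbf"`) lets SVM separate classes that no straight line can split.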


NAIVE BAYES
 A Naive Bayes classifier assumes that the presence of a particular feature in a
class is unrelated to the presence of any other feature.
 Even if these features are related to each other, a Naive Bayes classifier would
consider all of these properties independently when calculating the probability
of a particular outcome.
 A Naive Bayesian model is easy to build and useful for very large datasets. It is
simple, yet it can perform well even against highly sophisticated classification
methods.

Example code: Navie.txt
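The referenced file is not included; a minimal scikit-learn sketch using Gaussian Naive Bayes, which treats each numeric feature as independent within a class (the data are made up for illustration):

```python
from sklearn.naive_bayes import GaussianNB

# Hypothetical data: two numeric features per sample
X = [[1.0, 2.1], [1.2, 1.9], [0.9, 2.0],
     [5.0, 6.1], [5.2, 5.9], [4.9, 6.0]]
y = [0, 0, 0, 1, 1, 1]

# GaussianNB multiplies per-feature likelihoods together --
# the "naive" independence assumption described above
nb = GaussianNB().fit(X, y)
print(nb.predict([[1.1, 2.0], [5.1, 6.0]]))
```

The independence assumption is rarely true in practice, but it makes the probability calculation cheap, which is why the model scales so well.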


Dataset
