Chapter 5 Machine Learning
Today’s Artificial Intelligence (AI) has far surpassed the hype around blockchain
and quantum computing. This is because huge computing resources are now
easily available to ordinary users. Developers take advantage of this to create
new Machine Learning models and to re-train existing models for better
performance and results. The easy availability of High-Performance Computing
(HPC) has resulted in a sharp increase in demand for IT professionals with
Machine Learning skills.
There are several applications of AI that we use in practice today. In fact,
each one of us uses AI in many parts of our lives, even without knowing it.
Today’s AI can perform extremely complex jobs with great accuracy and speed.
Let us discuss an example of a complex task to understand what capabilities
are expected in an AI application that you would be developing today for
your clients.
Example
We all use Google Directions during our trips around the city, for a daily
commute or even for inter-city travel. The Google Directions application
suggests the fastest path to our destination at that instant. When we follow
this path, we observe that the suggestion is almost always right and that we
save valuable time on the trip.
In the following section, let us discuss in detail the statistical techniques that underlie such applications.
Statistical Techniques
The development of today’s AI applications began with the use of age-old
traditional statistical techniques. You must have used straight-line
interpolation in school to predict a future value. There are several other
such statistical techniques that have been successfully applied in developing
so-called AI programs. We say “so-called” because the AI programs that we
have today are much more complex and use techniques far beyond the
statistical techniques used by the early AI programs.
Some examples of statistical techniques that were used for developing early
AI applications, and that are still in practice, are listed here −
Regression
Classification
Clustering
Probability Theories
Decision Trees
Here we have listed only some primary techniques that are enough to get
you started on AI without overwhelming you with the vastness of the field. If
you are developing AI applications based on limited data, you would be using
these statistical techniques.
However, today data is abundant. To analyze the kind of huge data that we
now possess, statistical techniques are of limited help, as they have some
limitations of their own. More advanced methods, such as deep learning, have
therefore been developed to solve many complex problems.
Consider the problem of estimating the price of a house from its size. After
plotting the known data points on an XY plot, we draw a best-fit line to make
predictions for any other house given its size. You feed the known data to the
machine and ask it to find the best-fit line. Once the best-fit line is found
by the machine, you test its suitability by feeding in a known house size,
i.e. the X-value. The machine then returns the estimated Y-value, i.e. the
expected price of the house. The line can be extrapolated to find the price
of a house that is 3000 sq. ft. or even larger. This is called regression in
statistics. In particular, this kind of regression is called linear regression,
as the relationship between the X and Y data points is linear.
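As a rough sketch of this idea, scikit-learn's linear regression can be used; the house sizes and prices below are made-up illustrative values, not data from the text.

```python
# A rough sketch, assuming made-up house sizes (sq. ft.) and prices.
from sklearn.linear_model import LinearRegression

sizes = [[1000], [1200], [1500], [1800], [2000], [2400]]    # X: house size
prices = [200000, 235000, 290000, 340000, 375000, 450000]   # Y: known price

model = LinearRegression().fit(sizes, prices)               # find the best-fit line

# Extrapolate the fitted line to estimate the price of a 3000 sq. ft. house
print(model.predict([[3000]]))
```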
In many cases, the relationship between the X and Y data points may not be a
straight line; it may be a curve with a complex equation. Your task would now
be to find the best-fitting curve, which can be extrapolated to predict
future values. One such application plot is shown in the figure below.
You will use statistical optimization techniques to find the equation of the
best-fit curve. And this is exactly what Machine Learning is about: you use
known optimization techniques to find the best solution to your problem.
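As a rough sketch of such curve fitting, NumPy's least-squares polynomial fit can be used; the sample points and the choice of a quadratic curve are illustrative assumptions.

```python
# A rough sketch of non-linear curve fitting, assuming made-up sample points
# that roughly follow y = x**2 + 1 and a quadratic as the chosen curve.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.9, 10.2, 17.1, 26.3, 37.0])

coeffs = np.polyfit(x, y, deg=2)   # least-squares fit of a degree-2 polynomial
curve = np.poly1d(coeffs)          # turn the coefficients into a callable curve

# Extrapolate the fitted curve to predict the value at x = 8
print(curve(8.0))
```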
Machine learning has evolved through successive stages, from traditional statistical techniques to the deep learning models of today.
Supervised Learning
Supervised learning is analogous to training a child to walk. You hold the
child’s hand, show the child how to move a foot forward, walk yourself as a
demonstration, and so on, until the child learns to walk independently.
Regression
Similarly, in the case of supervised learning, you give concrete known
examples to the computer. You say that for a given feature value x1 the
output is y1, for x2 it is y2, for x3 it is y3, and so on. Based on this data, you
let the computer figure out an empirical relationship between x and y.
Once the machine is trained in this way with a sufficient number of data
points, you can ask the machine to predict Y for a given X. Assuming that you
know the real value of Y for this given X, you will be able to deduce whether
the machine’s prediction is correct.
Thus, you test whether the machine has learned by using the known test
data. Once you are satisfied that the machine is able to make predictions
with a desired level of accuracy (say 80 to 90%), you can stop training the
machine further.
Now, you can safely use the machine to do the predictions on unknown data
points, or ask the machine to predict Y for a given X for which you do not
know the real value of Y. This training comes under the regression that we
talked about earlier.
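A minimal sketch of this train-then-test workflow with scikit-learn is shown below; the synthetic data, the 80/20 split and the use of the R² score as the accuracy measure are illustrative assumptions.

```python
# A rough sketch of the train-then-test workflow, assuming synthetic (x, y)
# pairs, an 80/20 split and the R^2 score as the accuracy measure.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = [[i] for i in range(20)]           # known feature values x1, x2, ...
y = [3 * i + 5 for i in range(20)]     # corresponding known outputs y1, y2, ...

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)   # train on the known examples

# Check the predictions on test data whose real Y values are known
print(model.score(X_test, y_test))     # close to 1.0 means accurate predictions

# Once satisfied, predict Y for an X whose real Y is not known
print(model.predict([[25]]))
```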
Classification
You may also use machine learning techniques for classification problems. In
classification problems, you classify objects of a similar nature into a single
group. For example, in a set of 100 students, you may want to group them
into three groups based on their heights: short, medium and tall. Measuring
the height of each student, you place each one in the proper group.
Now, when a new student comes in, you put him in an appropriate group by
measuring his height. By following the principles of regression training, you
train the machine to classify a student based on his feature, the height.
When the machine learns how the groups are formed, it will be able to
classify any unknown new student correctly. Once again, you would use the
test data to verify that the machine has learned your technique of
classification before putting the developed model into production.
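Here is a minimal sketch of the height-grouping example treated as a supervised classification task; the heights, the group labels and the choice of a decision-tree classifier are illustrative assumptions.

```python
# A rough sketch, assuming made-up heights (cm), made-up group labels and a
# decision-tree classifier as one possible choice of model.
from sklearn.tree import DecisionTreeClassifier

heights = [[150], [155], [160], [168], [172], [178], [185], [190]]
groups = ["short", "short", "medium", "medium", "medium", "tall", "tall", "tall"]

clf = DecisionTreeClassifier().fit(heights, groups)

# A new student who is 175 cm tall is placed into one of the learned groups
print(clf.predict([[175]]))
```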
Supervised learning is where AI really began its journey. This technique has
been applied successfully in several cases. For example, you have used this
model while performing handwritten-character recognition on your machine.
Several algorithms have been developed for supervised learning. You will
learn about them in the following sections.
Unsupervised Learning
In unsupervised learning, we do not specify a target variable to the machine;
rather, we ask the machine, “What can you tell me about X?”. More specifically,
we may ask questions such as, given a huge data set X, “What are the five
best groups we can make out of X?” or “What features occur together most
frequently in X?”. To arrive at answers to such questions, you can understand
that the number of data points the machine would require to deduce a
strategy would be very large. In the case of supervised learning, the machine
can be trained with even a few thousand data points. However, in the case of
unsupervised learning, the number of data points that is reasonably required
for learning starts at a few million. These days, data is generally abundantly
available. Ideally, the data requires curation. However, given the amount of
data that continuously flows through social networks, data curation is in most
cases an impossible task.
The following figure shows the boundary between the yellow and red dots as
determined by unsupervised machine learning. You can clearly see that the
machine would be able to determine the class of each of the black dots with
fairly good accuracy.
Reinforcement Learning
Consider training a pet dog: we train our pet to bring a ball to us. We throw
the ball a certain distance and ask the dog to fetch it back. Every time the
dog does this right, we reward the dog. Slowly, the dog learns that doing the
job right earns it a reward, and then the dog starts doing the job the right
way every time in the future. Exactly this concept is applied in the
“reinforcement” type of learning. The technique was initially developed for
machines to play games. The machine is given an algorithm to analyze all
possible moves at each stage of the game. The machine may select one of
the moves at random. If the move is right, the machine is rewarded;
otherwise it may be penalized. Slowly, the machine starts differentiating
between right and wrong moves and, after several iterations, learns to
solve the game puzzle with better accuracy. The accuracy of winning the
game improves as the machine plays more and more games.
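The reward-and-penalty idea can be sketched with a tiny tabular Q-learning example; the "corridor" game, the reward values and the hyperparameters below are all illustrative assumptions, not taken from the text.

```python
# A rough sketch of tabular Q-learning on a made-up "corridor" game: the agent
# starts in state 0 and is rewarded only when it reaches the goal state 4.
import random

N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]      # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: the learned value of taking each action in each state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Occasionally explore a random move, otherwise exploit the best known move
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward the right outcome (reaching the goal), penalize wasted moves slightly
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: nudge the value toward reward plus discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After many games, the learned policy should prefer moving right (+1) everywhere
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```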
Deep Learning
Deep learning is a model based on Artificial Neural Networks (ANN), of which
Convolutional Neural Networks (CNNs) are a well-known example. There are
several architectures used in deep learning, such as deep neural networks,
deep belief networks, recurrent neural networks, and convolutional neural
networks.
So far, you have had a brief introduction to the various machine learning
models. Now let us explore, in slightly more depth, the various algorithms
that are available under these models.
k-Nearest Neighbours
Decision Trees
Naive Bayes
Logistic Regression
Support Vector Machines
As we move ahead in this chapter, let us discuss each of these algorithms in
detail.
k-Nearest Neighbours
The k-Nearest Neighbours algorithm, which is simply called kNN, is a
statistical technique that can be used for solving both classification and
regression problems. Let us discuss the case of classifying an unknown object
using kNN. Consider the distribution of objects as shown in the image given
below −
The diagram shows three types of objects, marked in red, blue and green
colors. When you run the kNN classifier on the above dataset, the boundaries
for each type of object will be marked as shown below −
Now, consider a new unknown object that you want to classify as red, green
or blue. This is depicted in the figure below.
As you can see visually, the unknown data point belongs to the class of blue
objects. Mathematically, this can be concluded by measuring the distance of
this unknown point from every other point in the data set. When you do so,
you will find that most of its neighbours are blue: the average distance to
the red and green objects would definitely be more than the average distance
to the blue objects. Thus, this unknown object can be classified as belonging
to the blue class.
The kNN algorithm can also be used for regression problems. The kNN
algorithm is available ready-to-use in most ML libraries.
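A minimal sketch of kNN classification with scikit-learn follows; the two-dimensional points and their colour labels are made-up illustrative values.

```python
# A rough sketch, assuming a made-up 2-D dataset with two colour classes.
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 2], [2, 1], [2, 3],    # class 0 (say, "red" objects)
     [7, 8], [8, 7], [8, 9]]    # class 1 (say, "blue" objects)
y = [0, 0, 0, 1, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3)   # look at the 3 nearest neighbours
knn.fit(X, y)

# The unknown point lies near the "blue" cluster, so it should be classified as 1
print(knn.predict([[7, 7]]))
```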
Decision Trees
A simple decision tree in a flowchart format is shown below −
You would write code to classify your input data based on this flowchart. The
flowchart is self-explanatory and trivial: in this scenario, you are trying to
classify an incoming email to decide when to read it.
In reality, decision trees can be large and complex. There are several
algorithms available to create and traverse these trees. As a machine
learning enthusiast, you need to understand and master these techniques of
creating and traversing decision trees.
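A minimal sketch of such a tree built with scikit-learn is shown below; the feature encoding (whether the email is from the boss, whether it is spam) and the "read now"/"read later" labels are illustrative assumptions based on the flowchart description.

```python
# A rough sketch, assuming the features [is_from_boss, is_spam] and the labels
# 1 ("read now") / 0 ("read later") as an encoding of the flowchart.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 0], [0, 0], [0, 1], [1, 0], [0, 1]]
y = [1, 0, 0, 1, 0]

tree = DecisionTreeClassifier().fit(X, y)

# A new email from the boss that is not spam should be read now
print(tree.predict([[1, 0]]))
```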
Naive Bayes
Naive Bayes is used for creating classifiers. Suppose you want to sort out
(classify) fruits of different kinds from a fruit basket. You may use features
such as the colour, size and shape of a fruit. For example, any fruit that is
red in colour, round in shape and about 10 cm in diameter may be considered
an apple. So to train the model, you would use these features and test the
probability that a given feature matches the desired constraints. The
probabilities of the different features are then combined to arrive at the
probability that a given fruit is an apple. Naive Bayes generally requires
only a small amount of training data for classification.
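A minimal sketch of the fruit classifier using Gaussian Naive Bayes from scikit-learn is given below; the feature encoding and the sample fruits are illustrative assumptions.

```python
# A rough sketch, assuming the features [colour_code, diameter_cm, roundness]
# and a handful of made-up fruits.
from sklearn.naive_bayes import GaussianNB

# colour_code: 0 = red, 1 = yellow; roundness: 0 (elongated) .. 1 (round)
X = [[0, 10, 0.90], [0, 9, 0.95], [1, 18, 0.40], [1, 17, 0.35]]
y = ["apple", "apple", "banana", "banana"]

model = GaussianNB().fit(X, y)

# A red, round fruit about 10 cm in diameter is most probably an apple
print(model.predict([[0, 10, 0.85]]))
```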
Logistic Regression
Look at the following diagram. It shows the distribution of data points in the
XY plane.
From the diagram, we can visually inspect the separation of the red dots from
the green dots. You may draw a boundary line to separate these dots. Now, to
classify a new data point, you just need to determine on which side of the
line the point lies.
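A minimal sketch with scikit-learn's logistic regression follows; the red/green coordinates are made-up illustrative values.

```python
# A rough sketch, assuming made-up coordinates for the red and green dots.
from sklearn.linear_model import LogisticRegression

X = [[1, 1], [1, 2], [2, 1],    # "red" dots (class 0)
     [5, 5], [6, 5], [5, 6]]    # "green" dots (class 1)
y = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression().fit(X, y)

# Classify a new point by the side of the learned boundary line it falls on
print(clf.predict([[4, 5]]))
```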
Support Vector Machines
Look at the following distribution of data. Here the three classes of data
cannot be linearly separated. The boundary curves are non-linear. In such a
case, finding the equation of the curve becomes a complex job.
Source: https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html
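A minimal sketch of a non-linear SVM with an RBF kernel in scikit-learn is given below; for simplicity it uses two made-up classes (an inner cluster surrounded by an outer ring) rather than the three classes in the figure.

```python
# A rough sketch, assuming two made-up classes: an inner cluster (class 1)
# surrounded by an outer ring of points (class 0).
from sklearn.svm import SVC

X = [[0, 0], [0.2, 0.1], [-0.1, 0.2], [0.1, -0.2],    # inner cluster
     [2, 0], [0, 2], [-2, 0], [0, -2]]                # outer ring
y = [1, 1, 1, 1, 0, 0, 0, 0]

# The RBF kernel lets the SVM learn a non-linear (curved) boundary
svm = SVC(kernel="rbf", gamma="scale", C=10.0).fit(X, y)

# A point near the centre should fall in class 1, a far-out point in class 0
print(svm.predict([[0.1, -0.1], [2.0, 2.0]]))
```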
These algorithms are trivial to use and, since they are well tested in the
field, you can safely use them in your AI applications. Most of these
libraries are free to use, even for commercial purposes.
In such problems, we might want to know, for example, how many voters would
vote for one candidate and how many for another. Thus, in general, given a
huge set of data points X, we are asking the machine, “What can you tell me
about X?”. Or it may be a question like “What are the five best groups we can
make out of X?”. Or it could even be “What three features occur together most
frequently in X?”.
k-Means Clustering
The 2000 and 2004 Presidential elections in the United States were close,
very close. The largest percentage of the popular vote that any candidate
received was 50.7% and the lowest was 47.9%. If a small percentage of the
voters had switched sides, the outcome of the elections would have been
different. There are small groups of voters who, when properly appealed to,
will switch sides. These groups may not be huge, but with such close races,
they may be big enough to change the outcome of the election. How do you
find these groups of people? How do you appeal to them with a limited
budget? The answer is clustering.
Cluster Identification
Cluster identification tells an algorithm, “Here’s some data. Now group
similar things together and tell me about those groups.” The key difference
from classification is that in classification you know what you are looking
for, while that is not the case in clustering.
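A minimal sketch of cluster identification with k-means in scikit-learn follows; the two-dimensional "voter" points are illustrative assumptions, and in practice each point would describe a voter through several demographic features.

```python
# A rough sketch, assuming made-up 2-D points; in a real campaign each point
# would describe a voter through several demographic features.
from sklearn.cluster import KMeans

X = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],    # one group of similar people
     [5.0, 5.0], [5.1, 4.9], [4.8, 5.2],    # a second group
     [9.0, 1.0], [8.8, 1.2], [9.2, 0.9]]    # a third group

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each point is assigned to one of the three discovered clusters
print(kmeans.labels_)
print(kmeans.cluster_centers_)
```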
Artificial Neural Networks
A neural network has an input layer with many sensors to collect data from
the outside world. On the right-hand side, we have an output layer that gives
us the result predicted by the network. In between these two lie several
hidden layers. Each additional layer adds further complexity in training the
network, but provides better results in most situations.
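A minimal sketch of such a layered network, written with the Keras API bundled with TensorFlow, is given below; the input width, hidden-layer sizes and three-class output are illustrative assumptions.

```python
# A rough sketch of a small layered network using the Keras API in TensorFlow.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                      # input layer: 4 features
    tf.keras.layers.Dense(16, activation="relu"),    # first hidden layer
    tf.keras.layers.Dense(16, activation="relu"),    # second hidden layer
    tf.keras.layers.Dense(3, activation="softmax"),  # output layer: 3 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.summary()   # prints the layer structure and parameter counts
```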
ANN Architectures
The diagram below shows several ANN architectures that have been developed
over a period of time and are in practice today.
Source:
https://towardsdatascience.com/the-mostly-complete-chart-of-neural-
networks-explained-3fb6f2367464
Applications
Deep Learning has shown a lot of success in several areas of machine
learning applications.
Mobile Apps − We use several web-based and mobile apps for organizing
our photos. Face detection, face ID, face tagging, identifying objects in an
image – all these use deep learning.
Agriculture is one such industry where people can apply deep learning
techniques to improve the crop yield.
Consumer finance is another area where machine learning can greatly
help in providing early detection of fraud and in analyzing a customer’s
ability to pay.
Deep learning techniques are also applied to the field of medicine to
create new drugs and provide a personalized prescription to a patient.
The possibilities are endless and one has to keep watching as the new ideas
and developments pop up frequently.
Now, we will look at some of the limitations of deep learning that we must
consider before using it in our machine learning application.
Black-box Approach
This is called a black-box approach because you do not know why the network
came up with a certain result. You do not know how the network concluded,
for example, that a given image is a dog. Now consider a banking application
where the bank wants to decide the creditworthiness of a client. The network
will certainly provide an answer to this question. However, will you be able
to justify it to the client? Banks need to explain to their customers why a
loan is not sanctioned.
Duration of Development
The process of training a neural network is depicted in the diagram below −
You first define the problem that you want to solve, create a specification
for it, decide on the input features, design a network, deploy it and test the
output. If the output is not as expected, take this as feedback to restructure
your network. This is an iterative process and may require several iterations
until the network is fully trained to produce the desired outputs.
Amount of Data
Deep learning networks usually require a huge amount of data for training,
while traditional machine learning algorithms can be used with great success
even with just a few thousand data points. Fortunately, data abundance is
growing at 40% per year and CPU processing power is growing at 20% per year,
as seen in the diagram given below −
Computationally Expensive
Training a neural network requires several times more computational power
than that required for running traditional algorithms. Successful training of
deep neural networks may require several weeks of training time.
To follow the machine learning algorithms discussed in this chapter, you need
a working knowledge of the following mathematical topics −
Statistics
Probability Theories
Calculus
Optimization techniques
Visualization
Mathematical Notation
Most machine learning algorithms are heavily based on mathematics. The level
of mathematics that you need to know is probably just beginner level. What is
important is that you should be able to read the notation that mathematicians
use in their equations. For example, if you are able to read the notation
below and comprehend what it means, you are ready to learn machine
learning. If not, you may need to brush up on your mathematics knowledge.
$$f_{AN}(net - \theta) = \begin{cases} \gamma & \text{if } net - \theta \geq \epsilon \\ net - \theta & \text{if } -\epsilon < net - \theta < \epsilon \\ -\gamma & \text{if } net - \theta \leq -\epsilon \end{cases}$$

$$\max_{\alpha}\left[\sum_{i=1}^{m} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{m} label^{(i)} \cdot label^{(j)} \cdot a_i \cdot a_j \left\langle x^{(i)}, x^{(j)} \right\rangle\right]$$

$$f_{AN}(net - \theta) = \frac{e^{\lambda(net - \theta)} - e^{-\lambda(net - \theta)}}{e^{\lambda(net - \theta)} + e^{-\lambda(net - \theta)}}$$
Probability Theory
Here is an example to test your current knowledge of probability theory:
Classifying with conditional probabilities.
$$p(c_i \mid x, y) = \frac{p(x, y \mid c_i)\, p(c_i)}{p(x, y)}$$
With these definitions, we can state the Bayesian classification rule −
If $p(c_1 \mid x, y) > p(c_2 \mid x, y)$, the class is $c_1$; if $p(c_2 \mid x, y) > p(c_1 \mid x, y)$, the class is $c_2$.
Optimization Problem
Here is an optimization function
$$\max_{\alpha}\left[\sum_{i=1}^{m} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{m} label^{(i)} \cdot label^{(j)} \cdot a_i \cdot a_j \left\langle x^{(i)}, x^{(j)} \right\rangle\right]$$

$$\alpha \geq 0, \ \text{and} \ \sum_{i=1}^{m} \alpha_i \cdot label^{(i)} = 0$$
If you can read and understand the above, you are all set.
Visualization
In many cases, you will need to understand the various types of visualization
plots to understand your data distribution and interpret the results of the
algorithm’s output.
Besides the above theoretical aspects of machine learning, you need good
programming skills to code those algorithms.
If you are developing the ML algorithm on your own, the following aspects
need to be understood carefully −
The language of your choice − This is essentially a matter of your proficiency
in one of the languages supported for ML development.
The IDE that you use − This would depend on your familiarity with the
existing IDEs and your comfort level.
Language Choice
Here is a list of languages that support ML development −
Python
R
MATLAB
Octave
Julia
C++
C
IDEs
Here is a list of IDEs which support ML development −
R Studio
PyCharm
IPython/Jupyter Notebook
Julia
Spyder
Anaconda
Rodeo
Google Colab
The above list is not necessarily comprehensive. Each IDE has its own merits
and demerits. The reader is encouraged to try out these different IDEs before
narrowing down to a single one.
Platforms
Here is a list of platforms on which ML applications can be deployed −
IBM
Microsoft Azure
Google Cloud
Amazon
MLflow
Once again, this list is not exhaustive. The reader is encouraged to sign up
for the above-mentioned services and try them out.