CH 5 Regularization
Unit-5: Regularization
Rohit Kumar
(Assistant Professor)
August 1, 2023
Outline
I said that a model has a low bias if it predicts the labels
of the training data well.
If the model makes many mistakes on the training data, we
say that the model has a high bias, or that the model underfits.
Underfitting
So, underfitting is the inability of the model to predict the
labels of the data it was trained on well.
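The idea can be made concrete with a small sketch (the data and model choices below are illustrative, not from the slides): a straight line fitted to quadratic data is too simple, so it makes large errors even on the data it was trained on, which is exactly what underfitting means.

```python
import numpy as np

# Illustrative example: a straight line underfits quadratic data.
x = np.linspace(-3, 3, 50)
y = x ** 2                                  # true relationship is quadratic

linear = np.poly1d(np.polyfit(x, y, 1))     # too simple: underfits
quadratic = np.poly1d(np.polyfit(x, y, 2))  # matches the data

# Training error (MSE) on the same data each model was fitted to.
err_linear = np.mean((linear(x) - y) ** 2)
err_quadratic = np.mean((quadratic(x) - y) ** 2)
# err_linear stays large (high bias); err_quadratic is near zero
```

The linear model cannot reduce its training error no matter how it is fitted, because the model family itself is too limited for the data.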
Regularization
Regularization encompasses the methods that force the
learning algorithm to build a less complex model.
In practice, that often leads to slightly higher bias but
significantly reduces the variance.
In the literature, this problem is known as the Bias-Variance
tradeoff.
The two most widely used types of regularization are called
L1-regularization and L2-regularization.
The idea is quite simple. To create a regularized model, we modify
the objective function by adding a penalizing term whose value is
higher when the model is more complex.
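A minimal sketch of this idea (function and parameter names are illustrative): the penalty term measures the complexity of the model by the magnitude of its weights, and the hyperparameter `lam` controls how strongly complexity is punished. L1 penalizes the sum of absolute weights; L2 penalizes the sum of squared weights.

```python
import numpy as np

def regularized_mse(w, X, y, lam=0.1, penalty="l2"):
    """MSE objective plus a penalizing term.

    The penalty is higher when the weights w are larger, i.e.
    when the model is more complex.
    """
    residual = X @ w - y
    mse = np.mean(residual ** 2)
    if penalty == "l1":
        return mse + lam * np.sum(np.abs(w))  # L1: sum of |w_j|
    return mse + lam * np.sum(w ** 2)         # L2: sum of w_j^2
```

Minimizing this modified objective instead of the plain MSE trades a small increase in bias for a larger reduction in variance, as described above.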
Rohit Kumar (Assistant Professor), P. K. Roy Memorial College, Dhanbad
Machine Learning
Regularization (cont...)